May 9, 2019

Two To-Do’s and a Word of Caution for Brands from Google I/O 2019

If a tree falls in a forest and no one is around to hear it – does it make a sound? If a feature is added and nobody can find it – does it exist?


June 4, 2018

The Most Important Feature for Brands from the WWDC 2018 Keynote

Every year, developers make the Great Migration to Apple’s Worldwide Developer Conference (WWDC). This week-long event is where Apple announces all the latest tech and tools coming to the suite of Apple products. Like Google I/O, WWDC kicks off with a keynote that hints at all the things that are to come throughout the rest of the week.

This year, it seems many tech companies are focusing on “quality of life” (QOL). Usually, the phrase “quality of life update” refers to a software update that improves the overall experience of an application or game – a combination of bug fixes, interface tweaks, performance enhancements, and anything else that makes a particular piece of software more pleasant to use. Recently, however, we’ve noticed more and more emphasis being placed on the user’s QOL rather than the software’s. Both Apple and Google have released features to help users spend less time on their phones and more time with those around them. Digital health is not a new concept, but it does seem to have fallen by the wayside in recent years.

While it isn’t the key takeaway we chose to highlight in this article (it was a close second), we would be remiss not to mention the hot topic of app optimization. Quite a bit of time was spent covering how developers could and should optimize apps in every way possible – file size, performance, and the amount of time users need to spend in an app to accomplish the desired task (which you should be doing anyway).

Speaking of QOL, Apple spent the majority of the keynote announcing new features for their apps and devices: Search Suggestions for photos, updates and UI changes for several first-party apps, new workouts on the Apple Watch, and much more. They also announced that you can FaceTime with up to 32 people at once while appearing as your own personalized emoji, aptly named Memoji (below).

Tim Cook and other Apple employees using FaceTime and the new Memoji

A majority of these updates benefited the ultimate end users of Apple devices, while some helped developers more easily and effectively build on Apple’s platforms. There was, however, one update that stood out above the rest as the “killer feature” for apps this year. And that feature is Siri Shortcuts.

Siri Shortcuts

These. Are. Big. Siri Shortcuts will change how a lot of people interact with a lot of apps. Since the emergence of digital voice assistants (DVAs), the biggest barrier to adoption has been the learning curve for users. “What can I ask it? Was it how I phrased it? I didn’t want it to open that app to do ____.” are all things you may have muttered to yourself when trying to communicate with your Google Home, HomePod, or Amazon Echo. But Siri Shortcuts are going to change that. Instead of adding voice-controlled features to an app that users may or may not ever discover, developers can now prompt users with a button to “Add to Siri.” This does not add a particular action to Siri; instead, it allows users to create their own custom phrase to activate a certain feature the app exposes. For example, instead of having to say “Hey Siri, play my ‘Running’ playlist in Spotify,” someone can create the custom phrase “Hey Siri, I’m going on a run” and the outcome will be the same.

arrow pointing to an iPhone X displaying a Siri Shortcut

This may not sound like much, but it could change Siri’s role for many people from a peripheral iPhone accessory to an app necessity. Instead of having to try several times to get a request to work, users can simply make their own. We aren’t exactly sure how this will work just yet, but we assume it will be based on deep linking.

example actions in a Siri Shortcut series

Another reason apps need to be Siri-ready is that Shortcuts will not be limited to individual actions – they can trigger a series of actions. Seen above, when asked “how’s the surf,” Siri began running through the requests the user had previously set up, like checking the weather and getting directions to the beach. Other examples Apple provided were Siri Shortcuts for “time to go home” or “let’s go to work.” In the “let’s go to work” example, Siri automatically knew to order the coffee from Starbucks that the user picks up on the way to the office every day. So, if your brand offers grocery pickup, you may want to integrate Siri in a way that lets people build a list of the common items they need each week and place the order with a single phrase.

Platform State of the Union keynote slide explaining the best uses for Siri Actions

By creating useful Siri integrations that can become part of a larger daily, weekly, or monthly routine instead of a one-off request, branded apps can quickly become a necessity of life even if they aren’t being manually launched. As in the example above, the user with the morning routine never opened the Starbucks app, but they still bought a coffee.

Stay tuned for more from Apple’s developer conference or contact us today to learn more about Siri Shortcuts and how your brand can best leverage them.

May 12, 2018

4 Things Businesses Need to Know from Google I/O 2018

It’s May, and you know what that means! Ok, maybe you don’t – it’s time for Google I/O! Every year, thousands of developers flood Mountain View, California to learn about the latest innovations and announcements from the software giant. The event kicked off today with its opening keynote presentation, and while many of us watched from our office in Dallas, a few lucky Rocketeers were onsite to experience the action firsthand. Over the next several days, we will share insider details, perspectives, and helpful recaps of what we uncover…starting with today’s keynote.

Make Good Things Together

Setting the theme in the first minute of the presentation, Google kicked off its annual developer conference with the phrase “make good things together.” This theme was present in nearly every segment of the presentation, whether it was Google taking on challenges to better the world or giving everyone the tools to do it themselves.

Here are the four things that caught our attention and that we believe will have the greatest impact on businesses in the next year: Google Assistant, App Actions, App Slices, and ML Kit. Plenty of other announcements piqued our interest, of course, but these four are the ones that will matter most to businesses in the coming year. Keep in mind that several of these are sneak peeks, and some will gain many more features and capabilities in the coming months – so be sure to check back for more information as it is released.

1. Google Assistant

Since making its debut two years ago, Google Assistant has grown into much more than it was at launch. At I/O today, three new features were announced for Assistant that could truly give it the edge over Alexa and Siri (and Cortana, I guess). Those features are Continued Conversation, Multiple Actions, and improved interactions with Google Assistant Apps (the really big one).

Continued Conversation allows Assistant to keep providing answers without the user having to preface each question with an “Okay Google.” Once the conversation at hand is complete, a simple “thank you” will end the interaction and kill the mic. This also means Assistant can now tell the difference between conversational chatter and an actual request.

Multiple Actions sounds simple but is extremely complex under the hood. Simply put, it allows the user to say things such as “what time is it and how long will it take me to get to work?” and get answers to both questions without having to ask each one individually.

Google Assistant showing Starbucks menu items in Android P

Google Assistant Apps have some new capabilities as well. To get ready for Smart Displays, Google gave Assistant a visual overhaul. Now, information is displayed full screen and can include additional visuals such as videos and images. eCommerce applications benefit greatly from this, as the transition into an Assistant App becomes much easier and more natural for the user. Previously, a user had to explicitly ask to be connected to a Google Assistant App, but now a simple request such as “order my usual from Starbucks” will take the user directly into the Starbucks Assistant App. Seen above, the user can quickly and easily select additional menu items to include in their order via the new visual interface. From first request to completed order, this interaction will likely involve fewer steps for the user than going directly into the Starbucks app (assuming it isn’t on the user’s home screen).
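For developers, the entry point into one of these richer Assistant experiences is a fulfillment handler. Purely as a rough sketch – the intent name, class, and response copy below are hypothetical, and we’re assuming the Actions on Google Java/Kotlin client library – handling an “order my usual” request could look something like this:

```kotlin
import com.google.actions.api.ActionRequest
import com.google.actions.api.ActionResponse
import com.google.actions.api.DialogflowApp
import com.google.actions.api.ForIntent

// Hypothetical fulfillment for a coffee-ordering Assistant App.
class CoffeeApp : DialogflowApp() {

    // "order.usual" is an illustrative Dialogflow intent name.
    @ForIntent("order.usual")
    fun orderUsual(request: ActionRequest): ActionResponse {
        return getResponseBuilder(request)
            .add("Your usual grande latte is on its way. Anything else?")
            .build()
    }
}
```

The richer visual responses (images, carousels of menu items) layer on top of the same request/response structure, so starting simple like this doesn’t box a brand in.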

2. App Actions

Suggestions already appear below the Google search bar while typing. Soon, suggested actions will begin to appear as well. This might not sound like much, but imagine someone searching for a particular product, like laundry detergent – the Walmart app could surface an App Action to “add to my grocery order” for pickup later (we sketch what that could look like in code at the end of this section).

App Actions example from Google I/O 2018 keynote

As shown above, Google provided an example of searching for Infinity War. When the user searched for it, they were prompted with options to buy tickets or watch the trailer. This is a great example of a contextual interface, but it doesn’t just happen like magic. Apps need to be optimized to allow for this type of interaction.

Headphones and smartphone showcasing App Actions

In this example, Google has placed App Actions in the launch menu. The suggestions are based on your everyday behavior. In this instance, it is suggesting that the presenter call Fiona, as he usually does at this time of day, or pick up the song he was last listening to, since his headphones are connected.
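Google hasn’t detailed the full App Actions developer workflow yet, but fulfillment is expected to ride on deep links into specific screens of an app. Purely as an illustrative sketch – the URI scheme, parameter, and class names here are our own inventions – the app side of the laundry-detergent example above could be a deep-link Activity like this:

```kotlin
import android.content.Intent
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity

// Purely illustrative: fulfills a deep link such as
//   grocery-demo://grocery/add?item=laundry+detergent
// i.e., the kind of link an App Action suggestion could fire.
// A matching <intent-filter> in AndroidManifest.xml is assumed.
class AddToOrderActivity : AppCompatActivity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)

        // The suggestion arrives as a VIEW intent carrying a data URI.
        val item = intent?.data?.getQueryParameter("item")
        if (intent?.action == Intent.ACTION_VIEW && item != null) {
            addItemToGroceryOrder(item) // hypothetical app-specific helper
        }
        finish() // hand control back once the item is recorded
    }

    private fun addItemToGroceryOrder(item: String) {
        // App-specific logic: add the item to the user's pickup order.
    }
}
```

The takeaway for brands: the sooner key screens and tasks in an app are addressable by deep link, the easier it will be to plug into App Actions when the tooling arrives.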

3. App Slices

Similar to App Actions, App Slices also appear in search, but with a difference: instead of simply suggesting an action, App Slices use the functionality of an app to display information directly in search. A Slice can present a clip of a video, allow the user to check in to a hotel, or even show photos from the user’s last vacation.

App Slices showcased at Google I/O 2018

In the example shown here, simply searching “Lyft” brings up the suggested routes in the Lyft app and displays the cost of each trip as well. We’ll learn more about which App Slices are available soon, so be sure to check back to learn more about the potential benefits of this innovation.
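Slices are served by the app itself through a SliceProvider that fills in a template. Here’s a minimal sketch, assuming the AndroidX Slice builders (exact builder signatures may differ across library versions, and the ride data is hard-coded for illustration):

```kotlin
import android.net.Uri
import androidx.slice.Slice
import androidx.slice.SliceProvider
import androidx.slice.builders.ListBuilder

// Illustrative only: serves a simple "ride home" Slice that search
// surfaces could display inline.
class RideSliceProvider : SliceProvider() {

    override fun onCreateSliceProvider(): Boolean = true

    override fun onBindSlice(sliceUri: Uri): Slice? {
        val context = context ?: return null
        return ListBuilder(context, sliceUri, ListBuilder.INFINITY)
            .addRow(
                ListBuilder.RowBuilder()
                    .setTitle("Work to Home")
                    .setSubtitle("\$10.14 · 18 min away") // hard-coded for the sketch
            )
            .build()
    }
}
```

Because the provider is part of the app, the Slice always shows live data – the same pricing and ETA logic the full app uses, just rendered inside search.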

4. ML Kit

Part of Firebase, ML Kit (Machine Learning Kit) now offers a range of machine learning APIs for businesses to leverage. Instead of having to build custom ML algorithms for anything and everything, optimize them for mobile usage, and then train them with hundreds (preferably thousands) of samples, developers can now start from Google-provided “templates” for some common business needs.

ML Kit and templates shown at I/O 2018

Leveraging TensorFlow Lite and available on both Android and iOS, ML Kit will make it easier to integrate image labeling, text recognition, face detection, barcode scanning, and more. It can even be used to build custom chatbots.
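To give a feel for how little code the pre-built APIs require, here’s a minimal sketch of on-device text recognition with the Firebase ML Kit SDK – method and artifact names reflect the SDK as we understand it today and may shift as it matures:

```kotlin
import android.graphics.Bitmap
import android.util.Log
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage

// Recognize text in a photo entirely on-device – no custom model,
// no training data, just the pre-built ML Kit API.
fun recognizeText(bitmap: Bitmap) {
    val image = FirebaseVisionImage.fromBitmap(bitmap)
    val detector = FirebaseVision.getInstance().onDeviceTextRecognizer

    detector.processImage(image)
        .addOnSuccessListener { result ->
            Log.d("MLKit", "Found text: ${result.text}")
        }
        .addOnFailureListener { e ->
            Log.e("MLKit", "Text recognition failed", e)
        }
}
```

That’s the whole integration: no model training, no hosting, and the on-device variant works offline.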

But That’s Not All

There were plenty of other announcements in the keynote and even more on their way as the week goes on. For instance, right after the keynote, we found out that Starbucks had nearly as many orders come through its progressive web app (PWA) as through its mobile app. We learned that Google Assistant can now make phone calls to schedule appointments – without the customer service representative realizing it’s a computer. Google announced a new Shush mode that silences notifications when a phone is placed face-down on a table, and a lot more.

Even among the four topics covered in this recap, there is more information to come as the week goes on. We’ll dive into each as we get more information back from our Rocketeers in California, so be sure to check back in a couple of days.

May 11, 2018

Google I/O 2018 Rocketeer Recap – Day 3

If you’re reading this, Google I/O is now over. Mountain View may be quiet once more, but the excitement is far from gone. The conference may be done, but the learnings continue: Google records nearly every session, since it is nearly impossible for any developer to attend every one that may be of interest. So, our developers will continue to search through the depths of Google’s resources to find the latest and greatest to bring to our clients.

To close things out, we wanted to take a moment and share a few favorite updates and pieces of technology that our developers saw during the final day of Google I/O.

Google Maps APIs for Gaming

Pokémon Go was an interesting case study in human behavior. It spread like wildfire and quickly generated countless news stories of players getting hit by cars, falling off motorcycles, wandering into active construction zones, and more (the examples go on and on… you get the idea). Now, Google has a solution for that too. Android Engineer Chris Koeberle stumbled across this limited-release project Google has been working on. To head off the hazards of another Pokémon Go-style craze, Google is working to create “safe zones” – places like public parks and malls – where events can occur in GPS-based games. They’re also working to cut down development time by making it easier to skin Google Maps using Unity. Again, this is not available to everyone yet, but we don’t expect it to stay that way forever.

Firebase at Work

With so many platforms, app integrations, and other services appearing these days, it’s hard to know which ones are truly reliable. Whether they lose support within months or are rife with bugs, many developers are extremely skeptical of these services. However, Senior Technical Architect Jonathan Campos wanted to be sure that skepticism isn’t applied to every service out there. “One of the worst rumors plaguing companies is that Firebase isn’t a ‘real’ platform. Rumblings of scalability and security that ‘true’ developers desire isn’t available with Firebase – but none of this has any credible backing. It may have been true when it released, but it shouldn’t be grouped with these bad actors any longer. Firebase is different. Firebase is secure. Firebase is scalable. It can support projects on a global scale and is up-to-date with the latest security standards. It is really impressive how much you can do if you just make the leap.”

Android Things

Powered by IoT Core, Android Things has more support than ever. To put it simply, Android Things is a suite of components and devices that play nice with the Android ecosystem. Instead of fighting to have your little device communicate with your phone, you can spend the bulk of your time making the magic happen.

TensorFlow gimbal demo

Director of Android Engineering Luke Wallace snapped this photo for us. As it says on the plaque, “This is a gimbal stabilizer built with a raspberry pi and taught with Google’s TensorFlow Lite Machine Learning technology.” If you’re unfamiliar, gimbal stabilizers rotate and turn a camera to keep it focused on a particular object. In this case, the ML model learned which direction it would need to twist the camera to keep the subject in view. Another awesome Android Things project on display was a sentinel that can monitor your house while you’re away.

Closing out Google I/O

All good things must come to an end, and so must Google I/O 2018. Before we close out the week, we’d like to take a moment to thank our Rocketeers for keeping us informed on the latest and greatest from Google.

Rocketeers stand in front of Google I/O statue

The Best is Yet to Come

Be sure to check back over the next few weeks as we take a deep dive into the new technologies offered by Google. We’ll look at how Google’s efforts are improving the human experience and changing how users interact with technology, and at how businesses can harness these innovations to improve their own projects and offerings.

If you cannot wait until then, contact us today to learn more about the changes coming to the Android ecosystem, Google’s new Machine Learning and Cloud technologies, and how all of these can improve the way businesses serve their customers.

May 10, 2018

Google I/O 2018 Rocketeer Recap – Day 2: A Day for Developers

Google shocked the world with its appointment-booking conversational AI, Google Duplex, when the conference began. They covered new UI elements in Android P, unveiled the new TPU 3.0 servers that power their machine learning and artificial intelligence, and announced that Google Assistant will help teach children to say "please" and free up time on your calendar. Believe it or not, all of these announcements were made in the first two hours of the conference. So, what could Google possibly have in store for the next two days of I/O 2018? We’re glad you asked.

While a lot of these might not be as flashy as an AI that makes phone calls, there’s no shortage of new and/or updated tools for developers to leverage in the coming year. Here’s what caught the attention of our Rocketeers during the second day of Google’s annual developer conference.

Android Jetpack

Developers can’t just write code and expect it to run perfectly; it needs to be tested – the more often, the better. Back in the day (like, last week), developers had to decide whether a test would run on the machine they’re using for development or on a device. Now, they can simply choose where they would like to test, and Jetpack takes care of the rest. The Jetpack test libraries will even simulate the conditions an Android device “in the wild” would face to make the results more accurate.
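Here’s a rough sketch of what that looks like in practice – the package name and assertion are placeholders – a single AndroidX test that can run on the local JVM (via Robolectric) or on a device, unchanged:

```kotlin
import android.content.Context
import androidx.test.core.app.ApplicationProvider
import androidx.test.ext.junit.runners.AndroidJUnit4
import org.junit.Assert.assertEquals
import org.junit.Test
import org.junit.runner.RunWith

// One test, two environments: run it locally on the JVM (Robolectric)
// or on a device/emulator without changing a line.
@RunWith(AndroidJUnit4::class)
class AppContextTest {

    @Test
    fun packageName_matchesExpected() {
        val context = ApplicationProvider.getApplicationContext<Context>()
        assertEquals("com.example.app", context.packageName) // placeholder package name
    }
}
```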

Angular

For many companies, Angular is the basis for the majority of their web applications. Because it is the reigning king of web application frameworks, new Angular features and improvements correspond directly to improvements in the applications built on it. Here are a few features that Senior Technical Architect Jonathan Campos found to be the most exciting:

Schematics – Customize the generated code for an application; improves development speed.

Angular Universal – A way to render Angular applications on the server at a user’s first request; improves first-paint speed and user experience.

Angular Elements – Allows rendering of Angular components without needing to include the entire Angular framework on a webpage.

Ivy Renderer – This remarkable change in rendering can both reduce bundle sizes and improve the initial load time of an application by removing unused code and recompiling only the code that changed between releases.

Cloud IoT Core

The Internet of Things (IoT) continues to grow, but it’s not getting any easier to manage – until now. Google’s IoT management tool, Cloud IoT Core, makes it much easier to connect, manage, and grow IoT ecosystems that tie seamlessly into Google Cloud Platform (GCP). And it’s not just development that’s streamlined; analytics are also more manageable than ever.
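Devices talk to Cloud IoT Core over standard protocols such as MQTT, authenticating with a short-lived, per-device JSON Web Token (JWT) instead of a stored password. Below is a minimal sketch using the Eclipse Paho MQTT client; the project, region, registry, and device names are placeholders, and JWT creation is omitted:

```kotlin
import org.eclipse.paho.client.mqttv3.MqttClient
import org.eclipse.paho.client.mqttv3.MqttConnectOptions
import org.eclipse.paho.client.mqttv3.MqttMessage
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence

// Publishes one telemetry event to Cloud IoT Core over MQTT.
// `jwt` must be a token signed with the device's registered key pair.
fun publishTelemetry(jwt: String, payload: ByteArray) {
    // Client ID format is defined by Cloud IoT Core; values are placeholders.
    val clientId = "projects/my-project/locations/us-central1/" +
            "registries/my-registry/devices/my-device"

    val client = MqttClient("ssl://mqtt.googleapis.com:8883", clientId, MemoryPersistence())
    val options = MqttConnectOptions().apply {
        userName = "unused"            // IoT Core ignores the username field
        password = jwt.toCharArray()   // the JWT serves as the password
    }

    client.connect(options)
    client.publish("/devices/my-device/events", MqttMessage(payload))
    client.disconnect()
}
```

From there, the published events land in GCP (via Cloud Pub/Sub), which is where the streamlined analytics story picks up.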

Lighthouse

An underrated feature hidden in Chrome, Lighthouse helps web developers pinpoint areas for optimization to increase the performance of their websites. As of I/O 2018, Google is expanding its feature set. The addition that will help companies most in monitoring their sites is the new Lighthouse API, which lets businesses integrate these diagnostics right into their continuous integration and continuous delivery (CI/CD) pipelines.

Photos

Google Photos is great on its own, but it doesn’t play well with others. As users take photos, Google Photos is great for backing them up and indexing them. However, finding one of those images from another app, like a photo editor, can mean minutes of digging through device folders. Yesterday, Google introduced a developer API for Google Photos. It will allow user-permitted apps to search through images directly or filter by categories like “documents” or “selfies.” Director of Android Engineering Luke Wallace had this to say about the new API: “Imagine picking a profile photo by just seeing your last 10 selfies in Google Photos, it would be so much quicker than it is today! The API allows for basic filtering of photos, adding photos to Google Photos, creating albums, and even enhancing the albums with more information around the photos like descriptions and map views.”
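The new API is a REST surface. As a minimal sketch – assuming an OAuth access token with the appropriate Photos scope is already in hand, and using the OkHttp client – searching the user’s library for recent selfies could look like this:

```kotlin
import okhttp3.MediaType.Companion.toMediaType
import okhttp3.OkHttpClient
import okhttp3.Request
import okhttp3.RequestBody.Companion.toRequestBody

// Searches the user's library for their most recent selfies.
// `accessToken` must carry a Google Photos Library API scope.
fun fetchRecentSelfies(accessToken: String) {
    val json = """
        {"pageSize": 10,
         "filters": {"contentFilter": {"includedContentCategories": ["SELFIES"]}}}
    """.trimIndent()

    val request = Request.Builder()
        .url("https://photoslibrary.googleapis.com/v1/mediaItems:search")
        .header("Authorization", "Bearer $accessToken")
        .post(json.toRequestBody("application/json".toMediaType()))
        .build()

    OkHttpClient().newCall(request).execute().use { response ->
        println(response.body?.string()) // JSON list of matching media items
    }
}
```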

Support Library

Unless an app has a singular purpose, it’s going to need a menu – among other things. Instead of making developers start from scratch each time, Google has made it easier than ever to customize the Material Design UI components in the support library. Now, instead of having to reinvent the wheel every time you want a custom interface, you can start with something that already resembles a wheel and modify it where you see fit. The best part? It’s not just for Android apps! The library provides UI components that can be used on Android, iOS, the web, Flutter, and React.
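For instance, rather than hand-rolling a branded button, a developer can start from the library’s component and adjust only what the brand needs. A small illustrative sketch using the Material Components button class (the text and values are placeholders):

```kotlin
import android.content.Context
import com.google.android.material.button.MaterialButton

// Start from the library's "wheel" and tweak it, instead of building
// a custom button from scratch. Values are illustrative.
fun brandedOrderButton(context: Context): MaterialButton =
    MaterialButton(context).apply {
        text = "Order pickup"
        cornerRadius = 24 // px; round the corners to match the brand
    }
```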

WorkManager

A lot happens behind the scenes, both when someone is actively using an app and when it’s running in the background. Ever put your phone in your pocket and notice it seemed unusually warm? That’s probably an unoptimized app ravaging your CPU. It’s why Google made WorkManager. With this nifty tool, developers get a single, consistent way to schedule deferrable background work – which will ultimately help them make more battery-friendly apps.
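As a quick sketch of how this looks in code – the worker and its constraints are illustrative, and the API shown reflects the stable WorkManager library rather than the initial preview:

```kotlin
import android.content.Context
import androidx.work.Constraints
import androidx.work.NetworkType
import androidx.work.OneTimeWorkRequestBuilder
import androidx.work.WorkManager
import androidx.work.Worker
import androidx.work.WorkerParameters

// Illustrative background job: sync data when conditions are cheap.
class SyncWorker(context: Context, params: WorkerParameters) : Worker(context, params) {
    override fun doWork(): Result {
        // ... upload logs, refresh caches, etc. ...
        return Result.success()
    }
}

// Run only while charging and on an unmetered network, so the job
// never ravages the battery or the user's data plan.
fun enqueueSync(context: Context) {
    val request = OneTimeWorkRequestBuilder<SyncWorker>()
        .setConstraints(
            Constraints.Builder()
                .setRequiresCharging(true)
                .setRequiredNetworkType(NetworkType.UNMETERED)
                .build()
        )
        .build()
    WorkManager.getInstance(context).enqueue(request)
}
```

The constraints are the battery-friendly part: the system batches and defers the work until the device can afford it, instead of the app spinning in the background.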

Closing out Day 2

While this may seem like a lot, this is just the tip of the iceberg. It seems no service, tool, or platform was left untouched this year. What’s even more astonishing is that Google has more releases on the way. Some of the updates we’re learning about are just now being released to the public and some aren’t even out yet. So, be sure to check in every now and then as we explore even more of these new features and services from Google.

Until then, if you’d like to hear more about the updates coming to Android and how Google’s services can improve both your iOS and Android applications, contact us today.
