If a tree falls in a forest and no one is around to hear it – does it make a sound? If a feature is added and nobody can find it – does it exist?
Setting the theme in the first minute of the presentation, Google kicked off its annual developer conference with the phrase “make good things together.” That theme ran through nearly every segment of the presentation, whether Google was describing the challenges it is tackling to better the world or giving everyone the tools to do it themselves.
Here are the four announcements that caught our attention and that we believe will have the greatest impact on business in the next year: Google Assistant, App Actions, App Slices, and ML Kit. Plenty of other topics piqued our interest, of course, but these four are the ones most likely to affect businesses in the coming year. Keep in mind that several of these are sneak peeks, and some will gain many more features and capabilities in the coming months – so be sure to check back for more information as it is released.
Since making its debut two years ago, Google Assistant has grown into much more than it was at launch. At I/O today, three new features were announced that could truly give Assistant the edge over Alexa and Siri (and Cortana, I guess). Those features are Continued Conversation, Multiple Actions, and improved interactions with Google Assistant Apps (the really big one).
Continued Conversation allows Assistant to keep providing answers without requiring an “Okay Google” before each question. Once the conversation at hand is complete, a simple “thank you” ends the interaction and kills the mic. This also helps Assistant distinguish conversational chatter from actual requests.
Multiple Actions sounds simple but is extremely complex under the hood. Simply put, it allows the user to say things such as “what time is it and how long will it take me to get to work?” and get answers to both questions without asking them individually.
Google Assistant Apps have some new capabilities as well. To get ready for Smart Displays, Google gave Assistant a visual overhaul. Now, information is displayed full screen and can include additional visuals such as videos and images. eCommerce applications can benefit greatly from the visual overhaul, as the transition into Assistant Apps is much easier and more natural for the user. Previously, a user had to request to be connected to a Google Assistant App, but now a simple request such as “order my usual from Starbucks” will take the user directly into the Starbucks Assistant App. Seen above, the user can quickly and easily select additional menu items to include in their order via the new visual interface. From first request to completed order, this interaction will likely involve fewer steps for the user than going directly into the Starbucks app (assuming it’s not on the user’s home screen).
Suggestions already appear below the Google search bar while typing. Soon, suggested actions will begin to appear as well. This might not sound like much, but imagine someone searching for a particular product, like laundry detergent; the Walmart app could surface an App Action to “add to my grocery order” for pickup later.
As shown above, Google provided an example of searching for Infinity War. When the user searched for it, they were prompted with options to buy tickets or watch the trailer. This is a great example of a contextual interface, but it doesn’t just happen like magic. Apps need to be optimized to allow for this type of interaction.
In this example, Google has placed App Actions in the launch menu. The suggestions are based on your everyday behavior. In this instance, it is suggesting the presenter call Fiona, as he usually does at this time of day, or continue listening to the song he last listened to, since his headphones are connected.
Similar to App Actions, App Slices also appear in search. But there is a difference. Instead of simply suggesting an action, App Slices use the functionality of an app to display information in search. It can present a clip of a video, allow the user to check in to a hotel, or even show photos from the user’s last vacation.
In the example shown here, simply searching “Lyft” brings up the suggested routes in the Lyft app and displays the cost of the trip as well. More details about which App Slices will be available should arrive soon, so be sure to check back to learn more about the potential benefits of this innovation.
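To make the idea concrete, here is a rough sketch of what a Slice provider can look like using the androidx.slice builder DSL Google showed this week. Everything here is invented for illustration – the class name, URI path, and ride details are hypothetical, not Lyft’s actual implementation:

```kotlin
import android.net.Uri
import androidx.slice.Slice
import androidx.slice.SliceProvider
import androidx.slice.builders.ListBuilder
import androidx.slice.builders.list
import androidx.slice.builders.row

// Illustrative only: a provider that answers a content URI with a
// single-row slice showing a suggested ride and its price.
class RideSliceProvider : SliceProvider() {

    override fun onCreateSliceProvider(): Boolean = true

    override fun onBindSlice(sliceUri: Uri): Slice? {
        val context = context ?: return null
        return list(context, sliceUri, ListBuilder.INFINITY) {
            row {
                title = "Ride to Work"
                subtitle = "\$12.34, about 18 min away"
                // A real slice would also set primaryAction (backed by a
                // PendingIntent) so tapping the row deep-links into the app.
            }
        }
    }
}
```

The provider is registered in the app’s manifest like any ContentProvider; search surfaces then request the slice by URI and render it inline.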
Part of Firebase, ML Kit (Machine Learning Kit) now offers a range of machine learning APIs for businesses to leverage. Instead of having to build custom ML algorithms for anything and everything, optimize them for mobile usage, and then train them with hundreds, or preferably thousands, of samples, Google now provides “templates” for some common business needs.
Leveraging TensorFlow Lite and available on both Android and iOS, ML Kit will make it easier to integrate image labeling, text recognition, face detection, barcode scanning, and more. It can even be used to build custom chatbots.
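As a sketch of how little code one of these “templates” can take, here is roughly what on-device text recognition looks like with the Firebase ML Kit APIs announced this week. The surrounding function and the `bitmap` input are assumptions for illustration:

```kotlin
import android.graphics.Bitmap
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage

// Sketch: run ML Kit's on-device text recognizer over a photo.
// `bitmap` is assumed to be captured or loaded elsewhere in the app.
fun recognizeText(bitmap: Bitmap) {
    val image = FirebaseVisionImage.fromBitmap(bitmap)
    val recognizer = FirebaseVision.getInstance().onDeviceTextRecognizer

    recognizer.processImage(image)
        .addOnSuccessListener { result ->
            // Each block is a paragraph-like region of detected text.
            for (block in result.textBlocks) {
                println(block.text)
            }
        }
        .addOnFailureListener { e ->
            // Inference failed; handle gracefully rather than crashing.
            e.printStackTrace()
        }
}
```

Because the model runs on-device, this works offline and nothing leaves the phone; cloud-backed variants trade that for higher accuracy.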
There were plenty of other announcements in the keynote and even more on their way as the week goes on. For instance, right after the keynote, we found out that Starbucks had nearly as many orders come through its PWA as via its mobile app. We learned that Google Assistant can now make phone calls to schedule appointments – without the customer service representative realizing it’s a computer. Google announced a new Shush mode to completely silence notifications when a phone is placed face down on a table, and a lot more.
Even among the four topics covered in this recap, there is more information to come as the week goes on. We’ll dive into each as we get more information back from our Rocketeers in California, so be sure to check back in a couple of days.
For the final day of Google I/O, we wanted to take a moment to share a few favorite updates and pieces of technology that our developers saw as the conference wrapped up.
Pokémon Go was an interesting case study in human behavior. It spread like wildfire and quickly produced countless news stories of people getting hit by cars, falling off motorcycles, wandering into active construction zones, and more – you get the idea. Now, Google has a solution for that too. Android Engineer Chris Koeberle stumbled across a limited-release project Google has been working on. To avoid another craze like Pokémon Go, Google is working to create “safe zones,” such as public parks and malls, where events in GPS-based games can occur. They’re also working to cut down development time by making it easier to skin Google Maps using Unity. Again, this is not available to everyone yet, but we don’t expect it to stay that way forever.
With so many platforms, app integrations, and more appearing these days, it’s hard to know which ones are truly reliable. Whether they lose support in months or are rife with bugs, many people are extremely skeptical of these services. However, Senior Technical Architect, Jonathan Campos, wanted to be sure this skepticism isn’t applied to every service out there. “One of the worst rumors plaguing companies is that Firebase isn’t a ‘real’ platform. Rumblings that the scalability and security ‘true’ developers desire aren’t available with Firebase have no credible backing. It may have been true when it released, but it shouldn’t be grouped with these bad actors any longer. Firebase is different. Firebase is secure. Firebase is scalable. It can support projects on a global scale and is up-to-date with the latest security standards. It is really impressive how much you can do if you just make the leap.”
Powered by IoT Core, Android Things has more support than ever. To put it simply, Android Things is a suite of components and devices that play nice with the Android ecosystem. Instead of fighting to have your little device communicate with your phone, you can spend the bulk of your time making the magic happen.
Director of Android Engineering, Luke Wallace, snapped this photo for us. As it says on the plaque, “This is a gimbal stabilizer built with a raspberry pi and taught with Google’s TensorFlow Lite Machine Learning technology.” If you’re unfamiliar, gimbal stabilizers rotate and turn cameras to keep them focused on a particular object. In this case, the ML model learned which direction it would need to twist the camera to keep the subject in view. Another awesome Android Things project on display was a sentinel that can monitor your house while you’re away.
All good things must come to an end, and so must Google I/O 2018. Before we close out the week, we’d like to take a moment to thank our Rocketeers for keeping us informed on the latest and greatest from Google.
Be sure to revisit over the next few weeks as we take a deep dive into the new technologies offered by Google. We’ll be looking at how Google’s efforts are improving the human experience, changing how users interact with technology, and how businesses can harness these innovations to improve their own projects and offerings.
If you cannot wait until then, contact us today to learn more about the changes coming to the Android ecosystem, Google’s new Machine Learning and Cloud technologies, and how all of these can improve the way businesses serve their customers.
While a lot of these might not be as flashy as an AI that makes phone calls, there’s no shortage of new and/or updated tools for developers to leverage in the coming year. Here’s what caught the attention of our Rocketeers during the second day of Google’s annual developer conference.
Developers can’t just write code and expect it to run perfectly; it needs to be tested – the more often, the better. Back in the day (like last week), developers had to decide up front whether their tests would run on their development machine or on a device. Now, they can simply choose where they’d like to test, and Jetpack will take care of the rest. Jetpack’s test libraries will even simulate the conditions an Android device “in the wild” would face to make the tests more accurate.
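To illustrate the write-once idea, here is a hedged sketch of a UI test using the unified AndroidX test APIs shown this week. The activity and view IDs (`MainActivity`, `greet_button`, `greeting_text`) are hypothetical; the point is that the same class can run on a local JVM or on a real device depending only on where it lives:

```kotlin
import androidx.test.core.app.ActivityScenario
import androidx.test.espresso.Espresso.onView
import androidx.test.espresso.action.ViewActions.click
import androidx.test.espresso.assertion.ViewAssertions.matches
import androidx.test.espresso.matcher.ViewMatchers.isDisplayed
import androidx.test.espresso.matcher.ViewMatchers.withId
import androidx.test.ext.junit.runners.AndroidJUnit4
import org.junit.Test
import org.junit.runner.RunWith

// The same runner works for local (Robolectric-backed) and on-device tests,
// so this file no longer has to be written twice.
@RunWith(AndroidJUnit4::class)
class GreetingTest {

    @Test
    fun tappingButtonShowsGreeting() {
        // Launch the (hypothetical) activity under test.
        ActivityScenario.launch(MainActivity::class.java)

        // Interact with it exactly as a user would.
        onView(withId(R.id.greet_button)).perform(click())
        onView(withId(R.id.greeting_text)).check(matches(isDisplayed()))
    }
}
```

Placed in `src/test`, this runs on the development machine; placed in `src/androidTest`, it runs on a device or emulator – no code changes required.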
For many companies, Angular is the basis for the majority of their web applications. Because it remains the reigning king of web frameworks, new Angular features and improvements translate directly into better applications. Here are a few features that Senior Technical Architect, Jonathan Campos, found most exciting:
Schematics – Customize the generated code for an application; improves development speed.
Angular Universal – A way to render out Angular applications at first request by a user; improves first draw speed and user experience.
Angular Elements – Allows rendering of Angular components without needing to include the entire Angular framework on a webpage.
Ivy Renderer – This remarkable change in rendering can both reduce bundle sizes and improve the initial load time of an application by removing unused code and only compiling the necessary code that changed between releases.
The Internet of Things (IoT) continues to grow, but it’s not getting any easier to manage – until now. Google’s IoT management tool, Cloud IoT Core, will make it much easier to manage, connect, and grow IoT ecosystems that seamlessly connect to Google Cloud Platform (GCP). It’s not just the development that’s streamlined; analytics are also more manageable than ever.
An underrated feature hidden in Chrome, Lighthouse helps web developers pinpoint areas for optimization to increase the performance of websites. As of I/O 2018, Google is expanding on its feature set. One feature that will help companies the most in monitoring their sites is the added Lighthouse API. This way, businesses can integrate diagnostics right into their Continuous Integration and Continuous Delivery pipelines.
Google Photos is great on its own, but it doesn’t play well with others. As users take photos, Google Photos is great for backing up and indexing those images. However, finding an image in another app, like a photo editor, can result in minutes of searching through device folders. Yesterday, Google introduced a developer API for Google Photos. This will allow user-permitted apps to search through images directly or by using categories like “documents” or “selfies” as a filter. Director of Android Engineering, Luke Wallace, had this to say about the new API: “Imagine picking a profile photo by just seeing your last 10 selfies in Google Photos, it would be so much quicker than it is today! The API allows for basic filtering of photos, adding photos to Google Photos, creating albums, and even enhancing the albums with more information around the photos like descriptions and map views.”
Unless an app has a singular purpose, it’s going to need a menu – among other things. Instead of starting from scratch each time, Google has made it easier than ever to edit UI components for Material Design in the support library. Now, instead of having to reinvent the wheel every time you want a custom interface, you can start with something that resembles a wheel and modify where you see fit. The best part? It’s not just for apps! The support library provides UI components that can be used for Android, iOS, web, Flutter and React.
A lot happens behind the scenes both while an app is in active use and while it’s running in the background. Ever put your phone in your pocket and notice it seemed unusually warm? It’s probably due to an unoptimized app ravaging your CPU. That’s why Google made WorkManager. With this nifty tool, developers can hand background work to the system to schedule intelligently – which will ultimately help developers make more battery-friendly apps.
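As a rough sketch of how this looks in practice, here is a hypothetical upload job scheduled with the WorkManager API so it only runs under battery-friendly conditions. The worker class and its task are assumptions for illustration:

```kotlin
import android.content.Context
import androidx.work.Constraints
import androidx.work.NetworkType
import androidx.work.OneTimeWorkRequestBuilder
import androidx.work.WorkManager
import androidx.work.Worker
import androidx.work.WorkerParameters

// Hypothetical worker that uploads cached analytics events in the background.
class UploadWorker(context: Context, params: WorkerParameters) :
    Worker(context, params) {

    override fun doWork(): Result {
        // ...upload the cached events here...
        return Result.success()
    }
}

fun scheduleUpload() {
    // Only run when the device is charging and on an unmetered network,
    // so the work never surprises the user's battery or data plan.
    val constraints = Constraints.Builder()
        .setRequiredNetworkType(NetworkType.UNMETERED)
        .setRequiresCharging(true)
        .build()

    val request = OneTimeWorkRequestBuilder<UploadWorker>()
        .setConstraints(constraints)
        .build()

    // WorkManager delegates to JobScheduler, Firebase JobDispatcher, or
    // AlarmManager under the hood, depending on the device's API level.
    WorkManager.getInstance().enqueue(request)
}
```

The constraints are the battery-friendly part: instead of waking the device on its own timer, the app describes the conditions it needs and lets the OS batch the work with everything else.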
While this may seem like a lot, this is just the tip of the iceberg. It seems no service, tool, or platform was left untouched this year. What’s even more astonishing is that Google has more releases on the way. Some of the updates we’re learning about are just now being released to the public and some aren’t even out yet. So, be sure to check in every now and then as we explore even more of these new features and services from Google.
Until then, if you’d like to hear more about the updates coming to Android and how Google’s services can improve both your iOS and Android applications, contact us today.
Google’s goal this year seems to be quality-of-life updates – both for developers and for the people who will ultimately use the products and services created by developers. Since Google is leaning heavily on Machine Learning (ML) to accomplish this goal, talk of ML and AI permeated the entire conference. The second most important theme of the day was simplification (and ML is helping with that too).
Once developers had a chance to learn more about the technologies originally discussed during the keynote, a trend emerged – Firebase was everywhere. Firebase had previously been used for a few things in app development, like crash reporting and user authentication through other web services such as Facebook and Twitter, but this year Google has made it a tool that every developer should be using. As we mentioned in our Google I/O 2018 keynote recap, Google added several new ML models within Firebase to make applications not only functional, but smart. This isn’t where the Machine Learning stops though. Google also added several new features to help with communication, overall application health, and data management. Senior Technical Architect, Jonathan Campos, believes the services in Firebase are so powerful that this could likely be the way most companies will implement Machine Learning in their applications for the foreseeable future.
Another Google-backed technology facing a revival is the Progressive Web App, or PWA. Introduced a few years ago, PWAs are basically lightweight versions of apps that live on the web. This year PWAs were front and center, featuring native integration with Chrome OS along with a host of new Lighthouse tools that give developers more actionable guidance.
Google also stepped up and added a host of new features and development best practices for Android. A key resource to aid in this is Android Jetpack – a set of libraries, tools, and architectural guidance to make it easier to build great Android applications.
Digital well-being is not a new idea, but Google is using it to help drive some of their initiatives. The Dashboard in Android P will help users understand where their time is going, and even encourage them to stop using apps that eat up too much time. For those who want to disconnect more, it could make things a lot easier by providing a little external force to drive behavior change. But it’s not just a person’s time and energy that Android P is going to help with – it’s also going to help extend the time their phone has energy. While monitoring which apps are used and when, Google will also use ML to decide when to close or block apps from operating in the background, ultimately extending the battery life of Android devices.
Carrying along with the theme of simplicity, Android P features a few new UI adjustments as well. For instance, many interactions will now occur at the bottom of the phone – where it is easiest to reach them with one hand. Other UI adjustments include gestures in place of button presses, icons such as the “back” button appearing only when they can be used, and a rotation icon that appears when the phone is turned 90 degrees.
There are other products aside from Android P that received updates users will enjoy as well. For instance, Android TV got an overhauled setup process that reduced setup time by about a third. Thanks to ML, Android TV will also predict what settings users are looking for as well. Google also updated ARCore to version 1.2 and announced Cloud Anchors. As one of our Android Engineers, Chris Koeberle, put it, “Cloud Anchors were the missing element to make it easy to create immersive multiperson augmented reality experiences. Being able to create an AR app that allows people to not just experience but collectively modify a virtual world - on Android and iOS - is going to open up possibilities we can't even imagine yet.”
“As a developer, I don’t focus on what something does, but what it enables me to do. Today Google enabled me to do a lot – specifically around Google Assistant, application development with Firebase and Machine Learning, web application quality with PWAs, and improvements to the Android ecosystem."
Jonathan Campos, Senior Technical Architect
With this many announcements made on day one, it’s hard to imagine what Google has in store for the rest of the week. But, that’s why we have developers on the scene to tell us what they discover, as they discover it. Be sure to check in tomorrow to hear what they stumble upon next.
In the meantime, contact us for more information about the changes coming in Android P. As one of the select Google Certified Agencies, we are privy to detailed information, beta releases, and direct access to in-house Google developers. With so many changes on their way, it’s more important than ever to build out your digital roadmap, and we can help.