November 10, 2020

Server-Driven UI for Android with Jetpack Compose

Jetpack Compose is a new toolkit for native Android development that enables declarative UI. Traditional Android UI development is done either with a markup language that creates and styles native components or with imperative Kotlin statements. With its declarative Domain-Specific Language (DSL), Jetpack Compose allows efficient development of UI with compact, easy-to-read statements.

One exciting capability of this new toolkit is the ability to more closely couple the UI with the business logic. With a traditional Android app, the entire presentation layer is deployed as part of the application. If the appearance of the app needs to change, a new version of the app must be deployed. We often struggle with the desire to build apps in such a way that we can make changes on the server and have them immediately reflected on the user’s device.

In the past, the most efficient way to achieve this has been to embed web pages within the app, but this requires a number of sacrifices. Because the web page rendering is mediated through a WebView, integrating the web and native pages can be a struggle. By developing with Compose, we can build components into our native UI that are a direct reflection of the endpoint results. This gives us much greater control over the appearance and behavior of our app without redeploying.

There are a great number of strategies for this, depending on the amount of control we hope to exercise remotely. Here, we present an example of a technique that directly renders API results in native screens. The focus here is on presenting the far end of the spectrum, where the server totally drives the UI, including building in server callbacks to submit form selections. The complete sample is available on GitHub.

A Simple Form

A form with two text fields

All of the UI in this form, displayed in an Android app, was generated from this JSON:

{
    "children": [
        {
            "viewtype": "TEXT",
            "label": "Form Header"
        },
        {
            "viewtype": "FORM",
            "children": [
                {
                    "viewtype": "TEXT",
                    "label": "Personal Information"
                },
                {
                    "viewtype": "TEXTFIELD",
                    "label": "First",
                    "data": "first_name"
                },
                {
                    "viewtype": "TEXTFIELD",
                    "label": "Last",
                    "data": "last_name"
                }
            ],
            "label": "Submit",
            "data": "/check"
        }
    ]
}

To make it easier to follow, the objects are labeled with the type of view that they will produce. The screen root is a Column view that presents its list of children, each of which is converted into a @Composable. For instance, this is the code that generates the First Name text input:

class TextFieldElement(val elementDto: ElementDto) : ComposableElement {
    val fieldName = elementDto.data ?: "value"

    @Composable
    override fun compose(hoist: Map<String, MutableState<String>>) {
        TextField(
            value = hoist[fieldName]?.value ?: "",
            onValueChange = { hoist[fieldName]?.value = it },
            label = { Text(elementDto.label ?: "") }
        )
    }

    override fun getHoist(): Map<String, MutableState<String>> {
        return mapOf(Pair(fieldName, mutableStateOf(elementDto.default ?: "")))
    }
}

When we parse the JSON, we transform each element from a Data Transfer Object (DTO) to an object that can return a @Composable. When the element accepts input, it also generates the hoists necessary to access and act on that data at a higher level in the view hierarchy. Here, our submit button is able to retrieve the text from the text input fields, and pass it on to our server. (In this case, the server is actually a fake built into the app for ease of portability.)
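As a rough sketch of that transformation, the DTOs and the factory that maps them to elements might look like the following. The field names mirror the JSON above (plus the default used by TextFieldElement), but the factory and the element classes other than TextFieldElement are simplified assumptions rather than the exact code from the sample:

data class ScreenDto(val children: List<ElementDto>?)

data class ElementDto(
    val viewtype: String?,
    val label: String? = null,
    val data: String? = null,
    val default: String? = null,
    val children: List<ElementDto>? = null
)

// Each DTO becomes an object that can return a @Composable and, if it
// accepts input, the state it hoists.
fun ElementDto.toElement(): ComposableElement = when (viewtype) {
    "TEXT" -> TextElement(this)
    "TEXTFIELD" -> TextFieldElement(this)
    "CHECKBOX" -> CheckboxElement(this)
    "FORM" -> FormElement(this)
    else -> TextElement(this) // fall back to plain text for unknown view types
}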

Building the Application

Our MainActivity is extremely small, because all it does is ask the server for the screen we will render. Its onCreate simply instantiates our base @Composable with the app theme:

setContent {
    MyApplicationTheme {
        MyScreenContent()
    }
}

Our @Composable has an external holder for the server JSON result that it provides as an Ambient to allow screen elements to trigger loading a new screen:

data class StringHolder(var held: MutableState<String>)
val ScreenJson = ambientOf<StringHolder>()

And here is our main @Composable that does the work of loading the screen from JSON. We use Moshi here instead of kotlinx serialization because kotlinx serialization is currently incompatible with Jetpack Compose. A workaround (separating the DTOs into a different module) works for many situations, but because we are converting our DTOs directly into @Composables, it will not work for us.

@Composable
fun MyScreenContent() {
    // Load initial API endpoint
    val screenJson = ServiceLocator.resolve(BackEndService::class.java).getPage("/", mapOf())
    // Create the holder that can be updated by other @Composables
    val screenJsonString = StringHolder(remember { mutableStateOf(screenJson) })
    val screenAdapter: JsonAdapter<ScreenDto> =
        ServiceLocator.resolve(JsonAdapter::class.java) as JsonAdapter<ScreenDto>
    Providers(ScreenJson provides screenJsonString) {
        val holder = ScreenJson.current
        screenAdapter.fromJson(holder.held.value)?.let {
            Screen(it).compose()
        }
    }
}
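The Screen created here is the Column root described earlier. A minimal sketch of it, assuming the toElement() factory from the previous sketch, could look like this:

class Screen(screenDto: ScreenDto) {
    private val elements = screenDto.children.orEmpty().map { it.toElement() }

    @Composable
    fun compose() {
        Column {
            elements.forEach { element ->
                // remember keeps each element's hoisted state across recompositions
                val hoist = remember { element.getHoist() }
                element.compose(hoist)
            }
        }
    }
}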

The FORM element in the JSON is the most customized element. It expects a data field, which is the URL to which the form submission will be posted. Each element that hoists data is responsible for identifying the key under which its value will be passed, and these key-value pairs are sent along as a map.

Button(onClick = {
    val parameters = children.flatMap { it.second.entries.map { entry -> Pair(entry.key, entry.value.value) } }.toMap()
    val newPage = ServiceLocator.resolve(BackEndService::class.java).getPage(elementDto.data ?: "", parameters)
    json.held.value = newPage
}) {
    Text(elementDto.label ?: "")
}
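For context, here is a hedged sketch of how the rest of the FORM element might be assembled around that Button. The class shape is an assumption based on the description above rather than the exact code from the sample; it pairs each child with the state that child hoists, renders the children in a Column, and finishes with the submit Button:

class FormElement(val elementDto: ElementDto) : ComposableElement {
    // Each child element paired with the state it hoists
    private val children: List<Pair<ComposableElement, Map<String, MutableState<String>>>> =
        elementDto.children.orEmpty()
            .map { it.toElement() }
            .map { element -> element to element.getHoist() }

    @Composable
    override fun compose(hoist: Map<String, MutableState<String>>) {
        // The Ambient gives the submit handler a way to swap in the next screen's JSON
        val json = ScreenJson.current
        Column {
            children.forEach { (element, childHoist) -> element.compose(childHoist) }
            // ...followed by the Button shown above, which reads `children` and `json`
        }
    }

    // The form itself hoists nothing; its children do
    override fun getHoist(): Map<String, MutableState<String>> = mapOf()
}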

Another Form

When the JSON text holder is updated at the Button level, it triggers recomposition at the top level, in MyScreenContent. The new JSON is read:

{
    "children": [
        {
            "viewtype": "TEXT",
            "label": "Form Header"
        },
        {
            "viewtype": "FORM",
            "children": [
                {
                    "viewtype": "TEXT",
                    "label": "Checkboxes"
                },
                {
                    "viewtype": "CHECKBOX",
                    "label": "First",
                    "data": "first_check"
                },
                {
                    "viewtype": "CHECKBOX",
                    "label": "Last",
                    "data": "last_check"
                }
            ],
            "label": "Submit",
            "data": "/welcome"
        }
    ]
}

And we display a new screen:

Form with two checkboxes

Moving On

Obviously, there is a lot of work to do to make this look polished. We can choose to do that work on the app side, by applying consistent styling to our building blocks and allowing the backend to compose them. We can also defer those decisions to the backend by allowing the backend to specify Modifier attributes that we will apply to each element.
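As one hedged illustration of that second approach, the backend could attach a modifier object such as {"padding": 16, "fillMaxWidth": true} to each element, and the app could translate it into a Compose Modifier before composing the element. The field names here are invented for the sketch and are not part of the sample's JSON contract:

import androidx.compose.foundation.layout.fillMaxWidth
import androidx.compose.foundation.layout.padding
import androidx.compose.ui.Modifier
import androidx.compose.ui.unit.dp

// Hypothetical backend-driven styling: map a ModifierDto onto a Compose Modifier.
data class ModifierDto(val padding: Int? = null, val fillMaxWidth: Boolean? = null)

fun ModifierDto?.toModifier(): Modifier {
    var modifier: Modifier = Modifier
    if (this?.fillMaxWidth == true) modifier = modifier.fillMaxWidth()
    this?.padding?.let { modifier = modifier.padding(it.dp) }
    return modifier
}

Each element's compose() could then accept the translated Modifier and apply it to its root view.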

This is just a small glimpse into a totally different style of app development. It will not be a great match for every project, but for projects with a high degree of control over the backend, and constantly evolving business logic, it can allow the Android app to seem as responsive as a webpage.

May 9, 2019

Two To-Do’s and a Word of Caution for Brands from Google I/O 2019

If a tree falls in a forest and no one is around to hear it – does it make a sound? If a feature is added and nobody can find it – does it exist?


May 12, 2018

4 Things Businesses Need to Know from Google I/O 2018

It’s May, and you know what that means! Ok, maybe you don’t – it’s time for Google I/O! Every year, thousands of developers flood Mountain View, California to learn about the latest innovations and announcements from the software giant. The event kicked off today with its inaugural keynote presentation, and while many of us watched from our office in Dallas, a few lucky Rocketeers were onsite to experience the action firsthand. Over the course of the next several days, we will share insider details, perspectives, and helpful recaps of what we uncover…starting with today’s keynote.

Make Good Things Together

Setting the theme in the first minute of the presentation, Google kicked off its annual developer conference with the phrase “make good things together.” This was present in nearly every segment of the presentation, whether it was about Google facing challenges to better the world or giving everyone the tools to do it themselves.

Here are the four things that caught our attention and that we believe will have the greatest impact on business in the next year: Google Assistant, App Actions, App Slices, and ML Kit. There were, of course, other topics that piqued our interest, but these are the ones that will matter most to businesses in the coming year. Keep in mind that several of these are sneak peeks, and some will gain many more features and capabilities in the coming months – so be sure to check back for more information as it is released.

1. Google Assistant

Making its debut two years ago, Google Assistant is much more than it was when it started. At I/O today, three new features were announced for Assistant that could truly give it the edge over Alexa and Siri (and Cortana I guess). Those features are Continued Conversation, Multiple Actions, and improved interactions with Google Assistant Apps (the really big one).

Continued Conversation allows Assistant to keep providing answers without the user having to prompt each question with an “Okay Google”. Once a user has completed the conversation at hand, a simple “thank you” will end the interaction and kill the mic. This also means Assistant can distinguish what is conversation from what is a request.

Multiple Actions sounds simple but is extremely complex. Simply put, this allows the user to say things such as “what time is it and how long will it take me to get to work?” and get answers to both questions without having to ask them individually to Assistant.

Google Assistant showing Starbucks menu items in Android P

Google Assistant Apps have some new capabilities as well. To get ready for Smart Displays, Google gave Assistant a visual overhaul. Now, information is displayed full screen and can include additional visuals such as videos and images. eCommerce applications can benefit greatly from the visual overhaul, as the transition into Assistant Apps is much easier and more natural for the user. Previously, a user had to ask to be connected to a Google Assistant App, but now a simple request such as “order my usual from Starbucks” will take the user directly into the Starbucks Assistant App. As seen above, the user can then quickly and easily select additional menu items to include in their order via the new visual interface. From first request to completed order, this interaction will likely involve fewer steps for the user than going directly into the Starbucks app (assuming it isn’t already on the user’s home screen).

2. App Actions

Suggestions already appear below the Google search bar while typing. Soon, suggested actions will begin to appear as well. This might not sound like much, but imagine someone searching for a particular product, like laundry detergent: the Walmart app could prompt an App Action to “add to my grocery order” for pickup later.

App Actions example from Google I/O 2018 keynote

As shown above, Google provided an example of searching for Infinity War. When the user searched for it, they were prompted with options to buy tickets or watch the trailer. This is a great example of a contextual interface, but it doesn’t just happen like magic. Apps need to be optimized to allow for this type of interaction.

Headphones and smartphone showcasing App Actions

In this example, Google has placed App Actions in the launch menu. The suggestions are based on your everyday behavior. In this instance, it is suggesting that the presenter call Fiona, as he usually does at this time of day, or continue listening to the song he last played, since his headphones are connected.

3. App Slices

Similar to App Actions, App Slices also appear in search. But there is a difference. Instead of simply suggesting an action, App Slices use the functionality of an app to display information in search. It can present a clip of a video, allow the user to check in to a hotel, or even show photos from the user’s last vacation.

App Slices showcased at Google I/O 2018

In the example shown here, simply searching “Lyft” brings up suggested routes in the Lyft app and displays the cost of the trip as well. We’ll learn more about which App Slices are available soon, so be sure to check back to learn more about the potential benefits of this innovation.

4. ML Kit

Part of Firebase, ML Kit (Machine Learning Kit) now offers a range of machine learning APIs for businesses to leverage. Instead of having to build custom ML models for anything and everything, optimize them for mobile usage, and then train them with hundreds or, preferably, thousands of samples, businesses can now start from Google-provided “templates” for some common needs.

ML Kit and templates shown at I/O 2018

Leveraging TensorFlow Lite and available on both Android and iOS, ML Kit will make it easier to integrate image labeling, text recognition, face detection, barcode scanning, and more. It can even be used to build custom chatbots.

But That’s Not All

There were plenty of other announcements in the keynote and even more on their way as the week goes on. For instance, right after the keynote, we found out that Starbucks had nearly as many orders come through its PWA as through its mobile app. We learned that Google Assistant can now make phone calls to schedule appointments – without the customer service representative realizing it’s a computer. Google announced a new Shush mode to completely silence notifications when a phone is placed face down on a table, and a lot more.

Even among the four topics covered in this recap, there is more information to come as the week goes on. We’ll dive into each as we get more information back from our Rocketeers in California, so be sure to check back in a couple of days.

May 10, 2018

Google I/O 2018 Rocketeer Recap – Day 2: A Day for Developers

Google shocked the world with its appointment-booking conversational AI, Google Duplex, when the conference began. They covered new UI elements in Android P, unveiled the new TPU 3.0 servers that power their Machine Learning and Artificial Intelligence, and announced that Google Assistant will help teach children to say "please" and free up time on your calendar. Believe it or not, all of these announcements were made in the first two hours of the conference. So, what could Google possibly have in store for the next two days of I/O 2018? We’re glad you asked.

While a lot of these might not be as flashy as an AI that makes phone calls, there’s no shortage of new and/or updated tools for developers to leverage in the coming year. Here’s what caught the attention of our Rocketeers during the second day of Google’s annual developer conference.

Android Jetpack

Developers can’t just write code and expect it to run perfectly; it needs to be tested – the more often, the better. Back in the day (like last week), developers had to decide whether they wanted to run their tests on the machine they’re using for development or on a device. Now, they can simply choose which they would like to test on, and Jetpack will take care of the rest. Jetpack’s testing tools will even simulate the conditions an Android device “in the wild” would face to make the tests more accurate.

Angular

For many companies, Angular is the basis for the majority of their web applications. Because it remains the reigning king of web application frameworks, new features and improvements translate directly into improvements in those applications. Here are a few features that Senior Technical Architect Jonathan Campos found to be the most exciting:

Schematics – Customize the generated code for an application; improves development speed.

Angular Universal – A way to render out Angular applications at first request by a user; improves first draw speed and user experience.

Angular Elements – Allows rendering of Angular components without needing to include the entire Angular framework on a webpage.

Ivy Renderer – This remarkable change in rendering can both reduce bundle sizes and improve the initial load time of an application by removing unused code and only compiling the necessary code that changed between releases.

Cloud IoT Core

The Internet of Things (IoT) continues to grow, but it’s not getting any easier to manage – until now. Google’s new IoT management tool, Cloud IoT Core, will make it much easier to manage, connect, and grow IoT ecosystems that seamlessly connect to Google’s Cloud Platform (GCP). It’s not just the development that’s streamlined; analytics are also more manageable than ever.

Lighthouse

An underrated feature hidden in Chrome, Lighthouse helps web developers pinpoint areas for optimization to increase the performance of websites. As of I/O 2018, Google is expanding on its feature set. One feature that will help companies the most in monitoring their sites is the added Lighthouse API. This way, businesses can integrate diagnostics right into their Continuous Integration and Continuous Delivery pipelines.

Photos

Google Photos is great on its own, but it doesn’t play well with others. As users take photos, Google Photos is great for backing up and indexing those images. However, finding an image in another app, like a photo editor, can result in minutes of searching through device folders. Yesterday, Google introduced a developer API for Google Photos. This will allow user-permitted apps to search through images directly or by using categories like “documents” or “selfies” as a filter. Director of Android Engineering, Luke Wallace, had this to say about the new API: “Imagine picking a profile photo by just seeing your last 10 selfies in Google Photos, it would be so much quicker than it is today! The API allows for basic filtering of photos, adding photos to Google Photos, creating albums, and even enhancing the albums with more information around the photos like descriptions and map views.”

Support Library

Unless an app has a singular purpose, it’s going to need a menu – among other things. Instead of starting from scratch each time, Google has made it easier than ever to edit UI components for Material Design in the support library. Now, instead of having to reinvent the wheel every time you want a custom interface, you can start with something that resembles a wheel and modify where you see fit. The best part? It’s not just for apps! The support library provides UI components that can be used for Android, iOS, web, Flutter and React.

WorkManager

A lot happens behind-the-scenes when actively using an app and when it’s running in the background. Ever put your phone in your pocket and it seemed unusually warm? It’s probably due to an unoptimized app ravaging your CPU. That’s why Google made WorkManager. With this nifty tool, developers have more visibility into solutions for background work – which will ultimately help developers make more battery-friendly apps.

Closing out Day 2

While this may seem like a lot, this is just the tip of the iceberg. It seems no service, tool, or platform was left untouched this year. What’s even more astonishing is that Google has more releases on the way. Some of the updates we’re learning about are just now being released to the public and some aren’t even out yet. So, be sure to check in every now and then as we explore even more of these new features and services from Google.

Until then, if you’d like to hear more about the updates coming to Android and how Google’s services can improve both your iOS and Android applications, contact us today.

May 9, 2018

Google I/O 2018 Rocketeer Recap – Day 1

During the annual Google I/O event, so much more is released than just what the keynote includes. Sure, most of the big news comes out in the first two hours, but more and more details and announcements come out as Google holds session after session over the course of the three-day event.

Google’s goal this year seems to be quality-of-life updates – both for developers and for the people who will ultimately use the products and services created by developers. Since Google is leaning heavily on Machine Learning (ML) to accomplish this goal, talk of ML and AI permeated the entire conference. The second most important theme of the day was simplification (and ML is helping with that too).

Improvements for Developers

Once developers had a chance to learn more about the technologies originally discussed during the keynote, a trend emerged – Firebase was everywhere. Firebase had previously been used for a few things in app development, like crash reporting and user authentication through other web services such as Facebook and Twitter, but this year Google has made it a tool that every developer should be using. As we mentioned in our Google I/O 2018 keynote recap, Google added several new ML models within Firebase to make applications not only functional, but smart. This isn’t where the Machine Learning stops though. Google also added several new features to help with communication, overall application health, and data management. Senior Technical Architect, Jonathan Campos, believes the services in Firebase are so powerful that this could likely be the way most companies will implement Machine Learning in their applications for the foreseeable future.

Another Google-backed technology enjoying a revival is the Progressive Web App, or PWA. Introduced a few years ago, PWAs are basically lightweight versions of apps that live on the web. This year PWAs were front and center, featuring native integration with Chrome OS along with a host of new Lighthouse tools to give developers more actionable guidance.

Google also stepped up and added a host of new features and development best practices for Android. A key resource to aid in this is Android Jetpack – a set of libraries, tools, and architectural guidance to make it easier to build great Android applications.

Improvements for Users

Digital well-being is not a new idea, but Google is using it to help drive some of their initiatives. The Dashboard in Android P will help users understand where their time is going, and even encourage them to stop using apps that eat up too much time. For those who want to disconnect more, it could make things a lot easier by providing a little external force to drive behavior change. But it’s not just a person’s time and energy that Android P is going to help with – it’s also going to help extend the time their phone has energy. Alongside monitoring what apps are used and when to help users disconnect, Google will use ML to decide when to close or block apps running in the background, ultimately extending the battery life of Android devices.

Carrying on with the theme of simplicity, Android P features a few new UI adjustments as well. For instance, many interactions will now occur at the bottom of the phone, where it is easiest to reach them with one hand. Other UI adjustments include gestures in place of button presses and icons that appear only when they can be used, such as the "back" button, or a rotation icon that appears when the phone is turned 90 degrees.

There are other products aside from Android P that received updates users will enjoy as well. For instance, Android TV got an overhauled setup process that reduced setup time by about a third. Thanks to ML, Android TV will also predict what settings users are looking for as well. Google also updated ARCore to version 1.2 and announced Cloud Anchors. As one of our Android Engineers, Chris Koeberle, put it, “Cloud Anchors were the missing element to make it easy to create immersive multiperson augmented reality experiences. Being able to create an AR app that allows people to not just experience but collectively modify a virtual world - on Android and iOS - is going to open up possibilities we can't even imagine yet.”

Closing out Day 1

“As a developer, I don’t focus on what something does, but what it enables me to do. Today Google enabled me to do a lot – specifically around Google Assistant, application development with Firebase and Machine Learning, web application quality with PWAs, and improvements to the Android ecosystem."

Jonathan Campos, Senior Technical Architect

With this many announcements made on day one, it’s hard to imagine what Google has in store for the rest of the week. But, that’s why we have developers on the scene to tell us what they discover, as they discover it. Be sure to check in tomorrow to hear what they stumble upon next.

In the meantime, contact us for more information about the changes coming in Android P. As one of the select Google Certified Agencies, we are privy to detailed information, beta releases, and direct access to in-house Google developers, unlike others without this elite certification. With so many changes on their way, it’s more important than ever to build out your digital roadmap, and we can help.
