June 4, 2018

The Most Important Feature for Brands from the WWDC 2018 Keynote

Every year, developers make the Great Migration to Apple’s Worldwide Developer Conference (WWDC). This week-long event is where Apple announces all the latest tech and tools coming to the suite of Apple products. Like Google I/O, WWDC kicks off with a keynote that hints at all the things that are to come throughout the rest of the week.

This year, it seems many tech companies are focusing on “quality of life” (QOL). Usually the phrase “quality of life update” refers to a software update that improves the overall experience of an application or game – typically a combination of bug fixes, interface tweaks, performance enhancements, and anything else that makes the software more pleasant to use. Recently, however, we’ve noticed more and more emphasis being placed on the user’s QOL rather than the software’s. Both Apple and Google have released features to help users spend less time on their phones and more time with the people around them. Digital health is not a new concept, but it does seem to have fallen by the wayside in recent years.

Although it wasn’t the key takeaway we chose to highlight in this article (it was a close second), we would be remiss if we didn’t mention the hot topic of app optimization. Apple spent quite a bit of time covering how developers could and should optimize apps in every way possible – file size, performance, and the amount of time users need to spend in an app to accomplish the desired task (which you should be doing anyway).

Speaking of QOL, Apple spent a majority of the keynote announcing new features for their apps and devices: Search Suggestions in Photos, updates and UI changes for several first-party apps, new workouts on the Apple Watch, and much more. They also announced that you can FaceTime with up to 32 people at once while using your own personalized emoji, aptly named Memoji (below).

Tim Cook and other Apple employees using FaceTime with the new Memoji

A majority of these updates benefited the end users of Apple devices, while some helped developers more easily and effectively build on Apple’s platforms. There was, however, one update that stood out above the rest as the “killer feature” for apps this year. And that feature is Siri Shortcuts.

Siri Shortcuts

These. Are. Big. Siri Shortcuts will change how a lot of people interact with a lot of apps. Since the emergence of DVAs (digital voice assistants), the biggest barrier to adoption has been the learning curve. “What can I ask it?” “Was it how I phrased it?” “I didn’t want it to open that app to do ____.” These are all things you may have muttered to yourself while trying to communicate with your Google Home, HomePod, or Amazon Echo. Siri Shortcuts are going to change that. Instead of adding voice-controlled features to an app that users may or may not ever discover, developers can now prompt users with a button to “Add to Siri.” This does not add a particular action to Siri itself; instead, it allows users to record their own custom phrase to trigger a specific feature the app supports. For example, instead of having to say “Hey Siri, play my ‘Running’ playlist in Spotify,” someone can create the custom phrase “Hey Siri, I’m going on a run” and the outcome will be the same.
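
For developers wondering what adoption looks like, here is a minimal sketch based on the iOS 12 APIs Apple showed this week. The activity type, phrase, and view controller are our own hypothetical examples, not Apple sample code:

```swift
import UIKit
import Intents
import IntentsUI

// A minimal sketch of donating a shortcut and offering "Add to Siri" (iOS 12).
class RunViewController: UIViewController {

    // Describes the "start a run" action and donates it to the system so
    // Siri can learn the routine and suggest it later.
    func donateStartRunActivity() {
        let activity = NSUserActivity(activityType: "com.example.app.startRun")
        activity.title = "Start my running playlist"
        activity.isEligibleForPrediction = true              // exposes it to Siri Shortcuts
        activity.suggestedInvocationPhrase = "I'm going on a run"
        userActivity = activity
        activity.becomeCurrent()                             // the donation itself
    }

    // Presents Apple's "Add to Siri" button so users can record a custom phrase.
    func addSiriButton(to stack: UIStackView) {
        let activity = NSUserActivity(activityType: "com.example.app.startRun")
        let button = INUIAddVoiceShortcutButton(style: .blackOutline)
        button.shortcut = INShortcut(userActivity: activity)
        stack.addArrangedSubview(button)
    }
}
```

Donating the activity each time the user takes the action is what lets Siri learn the routine and surface the shortcut at the right moment.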

arrow pointing to an iPhone X displaying Siri Shortcut

This may not sound like much, but it could change Siri’s role for many people from a peripheral accessory of the iPhone to an app necessity. Instead of having to try several times to get a request to work, users can simply make their own. We aren’t exactly sure how this will work just yet, but we assume it will be based on deep linking.

example actions in a Siri Shortcut series

Another reason apps need to be Siri-ready is that Shortcuts will work not just for individual actions, but for a series of actions. As seen above, when asked “how’s the surf,” Siri began running through the requests the user had previously set up – like checking the weather and getting directions to the beach. Other examples Apple provided were Siri Shortcuts for “time to go home” and “let’s go to work.” In the “let’s go to work” example, Siri automatically knew to order the coffee from Starbucks that the user picks up on the way to the office every day. So, if your brand offers grocery pick-up, you may want to integrate Siri in a way that lets people build a list of the common items they need each week and order them with a single phrase.

Platform State of the Union keynote slide explaining the best uses for Siri Actions

By creating useful Siri integrations that can become part of a larger daily, weekly, or monthly routine instead of a one-off request, branded apps can quickly become a necessity even if they aren’t being manually launched. As in the example above, the user with the morning routine never opened the Starbucks app, but they still bought a coffee.

Stay tuned for more from Apple’s developer conference or contact us today to learn more about Siri Shortcuts and how your brand can best leverage them.

February 15, 2018

What You Need To Know About AI

Lately, when clients come to me as a consultant, Artificial Intelligence (AI) usually comes up in our conversation. And when I’m asked, “How do I use it?” that tends to actually mean “What is it?” Let’s reach a basic understanding of AI so that when you’re ready to explore what it can offer, the discussion can be as productive as possible.

What Is Artificial Intelligence?

As the name implies, AI is the intelligent behavior of machines. Most companies could utilize AI to interpret complex data. Here’s how it works (in a very simplified way): you ask a question of a set of data, and the AI model returns an answer. To accomplish this task, an AI model needs to understand the data it’s interpreting. So, for AI to deliver accurate, useful information, the model needs to be trained on the data it’s given. We’ll get into that soon, but first let’s talk about the data itself.

Learning From Data

For Artificial Intelligence to work, it needs to learn from specific kinds of data. With organized, not random, data, an AI platform can learn what it needs to. Let’s say you want to train an AI model to identify dogs in images. Organized data would consist of animal images, including dogs, to help the AI discern what is and what is not a dog. Random data (in this case, images of tables, lawnmowers, anything not reasonably close to our concept of a dog) doesn’t help the AI distinguish dogs from other animals. Know what’s in your data and you should be able to avoid any randomness.

At this point, I should clarify that AI isn’t actually telling you definitively what something is; it can only tell you what something probably is, or probably is not. This is expressed as a prediction percentage: you’re not teaching an AI model to know an animal is a dog but rather training it to tell you it’s a certain percentage confident it recognized a dog. If you’ve been feeding your AI model images of only dogs and cats and then introduce an image of a squirrel, your AI will be less certain of what it sees. But once you teach the model what a squirrel looks like, it can discern with more certainty.
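
To see what those prediction percentages look like in practice, here is a minimal sketch using Apple’s Vision framework. The “AnimalClassifier” model is hypothetical, a stand-in for any image classifier trained on labeled animal photos:

```swift
import Vision
import CoreML

// A minimal sketch of on-device image classification. "AnimalClassifier" is a
// hypothetical Core ML model; the request returns ranked confidence scores,
// never a definitive answer.
func classify(_ image: CGImage) throws {
    let model = try VNCoreMLModel(for: AnimalClassifier().model)
    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let results = request.results as? [VNClassificationObservation] else { return }
        for observation in results.prefix(3) {
            // Prints e.g. "dog: 87%" -- a probability, not a certainty
            print("\(observation.identifier): \(Int(observation.confidence * 100))%")
        }
    }
    try VNImageRequestHandler(cgImage: image, options: [:]).perform([request])
}
```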

How To Train Your AI

Machine learning, the ability for computers to learn without being explicitly programmed, can take place in a couple of different ways. One way is supervised learning, where an AI model infers a function from labeled training data. Those images of animals I mentioned earlier would be labeled “dog,” “cat,” “hippopotamus,” etc. to help the AI learn. The other machine learning method is unsupervised learning, where the model draws inferences from data sets consisting of input data without labeled responses: you provide a bunch of pictures of animals with no descriptions and let the AI figure out what is and what is not a dog. With either approach, remember to provide organized data for your AI to learn from.

Going With Google

Google provides a lot of data sets and pre-trained AI models for purchase. But, they’re all about object recognition and will only do a good job of recognizing things that existed when the model was trained. So, a Google AI model may have learned what a bike and pogo stick are at some point, but a newer invention, like a Segway, could confuse it.

Beyond Cats And Dogs

I’ve led or been on teams that have trained models for tasks beyond basic image recognition, like classifying audio samples or, during an innovation session, recognizing which restaurant was delivering food by identifying food service workers and what they were carrying. The latter is an excellent example of what can happen when you train very specific models. In this application, the AI model learned from uniforms, not beverage cups, cars, or even people. The data set provided in this case included the shapes of what delivery drivers could be carrying and the clothes they wore.

The accuracy of your AI model is really about the bucket of data you give it. You need a lot of pictures, and they need to be of a similar quality. The more similar they are, the less training you likely need.

What Could Go Wrong?

Okay, our hypothetical AI is up and running. What is at stake if it’s wrong? Within your own organization, determine the impact of a false positive or false negative. An AI that filters email spam can afford to let some spam through but could create problems if it marks something important as spam. Now imagine the consequences of an AI reading x-rays and returning a false negative cancer diagnosis.

Now, if you feel more comfortable with how AI works, you can begin the challenging task of figuring out where it fits in your organization.

For information about Artificial Intelligence and Google Assistant, download our Google Assistant POV by filling out the form below.

Originally published on Forbes on Jan 8, 2018.

June 19, 2017

Top Takeaways from WWDC 2017 that Weren’t in the Keynote

While there were a lot of big announcements during this year’s WWDC keynote, our Rocketeers learned even more during the sessions that followed. Some of these updates barely made an appearance at the conference, but we think they’re some of the most exciting yet. If you’re interested in learning more about these topics, you’re also welcome to watch our webinar that aired Wednesday, June 14, 2017. Click here to watch.

Business Chat Could Change Everything

Any business large enough to have a call center or customer support group should take note. Business Chat opens a support line directly in iMessage, making it easier than ever to solve everyday customer problems with a tool that’s familiar to everyone. The interaction can begin from a button in an app, a link on your website, a CTA in an order confirmation email, or pretty much anywhere else you’d want to put it. Within the chat, you can share files, images, product photos and videos, and much more. For example, if a customer wants to upgrade or change their seat on a flight, the airline can send them a layout of the available seats and even charge for the upgrade through Apple Pay directly in iMessage. If you schedule a meeting in the chat, the details are saved directly to your calendar.

Business Chat is available today and already integrates with LivePerson, Salesforce, Nuance, and Genesys.

CoreNFC Now Open to Developers

Near Field Communication (NFC) has been around for several years now, and the odds are good that you’ve used it and didn’t even know. NFC can be used for a wide range of applications, but to date it has primarily been used for mobile payment through apps such as Apple Pay. However, that may change very soon as Apple has officially opened the iPhone’s NFC functionality to developers. In true Apple style, they have taken every precaution to ensure user data remains secure. Each session must be initiated by the user and developers can only read, not write, data from an NFC tag. This means there will never be an accidental scan or possibility of someone pulling information from your phone. Brands will be able to leverage NFC for everything from presenting more information about a painting in a museum to adding items to an account in a hotel – but they will not be able to bill you directly from the interaction.
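
For developers curious what that read-only model looks like in code, here is a minimal sketch of a CoreNFC tag-reading session, assuming the museum use case above; the alert message and payload handling are illustrative:

```swift
import CoreNFC

// A minimal sketch of a read-only NFC tag session (iOS 11).
// Each session must be started by an explicit user action, per Apple's rules.
class TagReader: NSObject, NFCNDEFReaderSessionDelegate {
    var session: NFCNDEFReaderSession?

    func beginScanning() {
        session = NFCNDEFReaderSession(delegate: self, queue: nil, invalidateAfterFirstRead: true)
        session?.alertMessage = "Hold your iPhone near the exhibit tag."
        session?.begin()
    }

    func readerSession(_ session: NFCNDEFReaderSession, didDetectNDEFs messages: [NFCNDEFMessage]) {
        for message in messages {
            for record in message.records {
                // The payload might carry a URL to more info about, say, a painting
                print(String(data: record.payload, encoding: .utf8) ?? "binary payload")
            }
        }
    }

    func readerSession(_ session: NFCNDEFReaderSession, didInvalidateWithError error: Error) {
        print("Session ended: \(error.localizedDescription)")
    }
}
```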

QR Reader Added to Default Camera App

In the United States, QR code sightings can be uncommon depending on where you live; in eastern markets, they are much more common. QR codes failed to reach widespread adoption in America because some people didn’t know what to do with them and others didn’t see the value in downloading an app specifically for reading them. Now that Apple has integrated a QR reader into the default Camera app, that could change. However, western adoption of QR codes relies on content creators and advertisers just as much as, if not more than, users interacting with them. QR codes can be used for a wide range of applications, such as sharing a playlist, opening a YouTube video, downloading an app, adding an item to a cart, and much more. The more interesting the experience, the more likely users are to give QR codes a try. To best leverage them, think guerrilla marketing mixed with surprise and delight – people should feel as though they found something special rather than an advertisement, and where it takes them should almost be a reward.

CoreML Brings Machine Learning to the iPhone

The ways Google and Apple have approached artificial intelligence (AI) and machine learning (ML) are very different. One of the biggest differences is where the “magic” happens: Google’s approach is in the cloud, while Apple’s is on-device. Processing the information on the iPhone itself provides not only a much faster experience but a much more private one. CoreML has three offerings at the moment: Vision for image analysis, Foundation for language processing, and GameplayKit for NPC (non-player character) behavior, pathfinding, and more. While GameplayKit will mostly be used by game developers, Vision and Foundation can be used for a multitude of applications. For example, Vision can recognize barcodes. You could use it to show more information about a product after a consumer scans the barcode or, with some training, teach your app to recognize the product itself so that consumers can simply take a picture of it to learn more.
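
Here is a minimal sketch of the barcode use case using Vision’s built-in barcode request; the product lookup that follows the scan would be your own:

```swift
import Vision

// A minimal sketch using Vision's barcode detection (iOS 11) to pull the
// payload from a product barcode, entirely on-device.
func detectBarcode(in image: CGImage) throws {
    let request = VNDetectBarcodesRequest { request, _ in
        guard let barcodes = request.results as? [VNBarcodeObservation] else { return }
        for barcode in barcodes {
            // payloadStringValue holds e.g. the EAN/UPC digits;
            // look the product up from there
            print(barcode.symbology, barcode.payloadStringValue ?? "unreadable")
        }
    }
    try VNImageRequestHandler(cgImage: image, options: [:]).perform([request])
}
```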

Siri’s New Extensions

Three new extensions are now available to developers through SiriKit. For apps that let users create or check off items on a list, Siri can now be integrated so users can take actions on those lists by voice. The other two extensions, Points and Domains, can be leveraged for rewards and loyalty points. Points will allow users to ask Siri questions such as “do I have enough points to book a flight to LA?” and Domains will allow users to scan visual codes, such as a loyalty code on a purchase, to have points automatically added to their account in the app. With these new extensions, the customer experience in apps can improve greatly, as Siri makes it easier for consumers to track and add reward points to their accounts.
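
Here is a minimal sketch of what the lists integration could look like, using the INAddTasksIntent from SiriKit’s new lists support in iOS 11; the GroceryStore type is a hypothetical stand-in for your own data layer:

```swift
import Intents

// A minimal sketch of handling SiriKit's lists domain (iOS 11). Siri parses a
// phrase like "add milk to my shopping list" and hands the extension an
// INAddTasksIntent to fulfill. GroceryStore is hypothetical.
class AddTasksHandler: NSObject, INAddTasksIntentHandling {

    func handle(intent: INAddTasksIntent,
                completion: @escaping (INAddTasksIntentResponse) -> Void) {
        let titles = intent.taskTitles ?? []
        // Persist each spoken item through your own data layer
        titles.forEach { GroceryStore.shared.add(item: $0.spokenPhrase) }
        completion(INAddTasksIntentResponse(code: .success, userActivity: nil))
    }
}
```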

The App Store Gets an Overhaul

There are some big changes coming to the App Store. First off, Apple has completely redesigned the store and added several sections to improve the app discovery experience. There will be three primary sections: Apps, Games, and Today. To make it even easier to decide whether you want to download an app, Apple has also added the option for developers to upload up to three videos showcasing gameplay, features, and more. What’s even more exciting is that Apple now lets developers decide whether to reset their reviews when uploading a new version of an app. Believe it or not, some developers with highly rated apps would let bugs go unfixed for weeks to avoid the risk of their ratings dropping with a new release. Now hotfixes are much less stressful for brands and developers, as they can push out several builds of an app and retain their ratings and reviews.

Check back for more updates as these new features and tools become available. In the meantime, please feel free to contact us with any other questions you may have.

June 2, 2017

WWDC 2017: What We’re Excited About

Just as our Rocketeers are coming down from Google I/O fever, Bottle Rocket’s iOS engineers are gearing up for Apple’s 2017 Worldwide Developers Conference (WWDC). We’re sending some of our Rocketeers to the event, where they anticipate announcements regarding Siri and many other surprises that present and potential clients can use to grow their businesses and further connect with customers. We’ve asked some of our Engineering Jedi (our lead engineers) what they hope to see at WWDC this year. Here are Russell Mirabelli, Ryan Gant, and Josh Smith with expert insight (and plenty of tech talk).

Apple’s Siri-enabled Speaker

As a potential competitor to the Amazon Echo and Google Home, this speaker is rumored to be powered by one of Apple’s own A-series ARM processors and to run a variant of iOS. It is also thought to use some form of Beats technology and support AirPlay. Expected to carry a premium price, the speaker could feature high-end audio with one woofer and seven tweeters built in. If Apple does release its own smart speaker, our clients could easily leverage the code already written in their iOS apps and bring it into the home. This could be yet another platform for our clients to utilize, as it opens up conversational interactions between brands and their customers.

Utilizing machine learning, or SiriKit, within an app could make the difference between having the next new thing and having an app that’ll be outdated and underused in five months. So, brands should watch closely for the addition of a Siri-enabled smart speaker, since there’s a pretty good chance Siri will get some improvements to support it.

SiriKit Improvements

It's a safe bet that we'll get quite a few new intent domains for SiriKit, which will bring Siri integration into many new applications. When Siri expands, it brings with it a whole new way for users to interact with your apps. Imagine asking Siri for something and your app giving you exactly what you want. There’s a good chance Apple will expand SiriKit to include more domains outside of the current Ride Booking, Messaging, Photo Search, Payments, VoIP Calling, Workouts, Climate, and Radio.

If these enhancements occur, we would be able to leverage Siri in both the speaker and on iOS devices in these ways for clients:

  • Order your favorite food with just your voice
  • Instantly play an episode of your favorite show just by asking
  • Determine the newest videos available on your favorite TV anywhere app (or what comes on tonight)
  • Ask Coca-Cola Freestyle to pour a saved mix by voice
  • Perform a search for a flight or hotel and book using only your voice
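
For a sense of what such integrations look like in code, here is a minimal sketch of a handler for one of SiriKit’s existing domains (payments, from iOS 10); any new domains would presumably follow the same intent-and-handler pattern. The payment hand-off itself is left as a hypothetical:

```swift
import Intents

// A minimal sketch of an existing SiriKit domain handler (iOS 10 payments).
class SendPaymentHandler: NSObject, INSendPaymentIntentHandling {

    func handle(intent: INSendPaymentIntent,
                completion: @escaping (INSendPaymentIntentResponse) -> Void) {
        guard let amount = intent.currencyAmount?.amount,
              let currency = intent.currencyAmount?.currencyCode,
              let payee = intent.payee else {
            completion(INSendPaymentIntentResponse(code: .failure, userActivity: nil))
            return
        }
        // Hand off to your real payment backend here (hypothetical)
        print("Sending \(amount) \(currency) to \(payee.displayName)")
        completion(INSendPaymentIntentResponse(code: .success, userActivity: nil))
    }
}
```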

Developer Tools

Steel yourself—this is straight developer talk. Our iOS developers love new tools. We know what's next in Swift for this year because it's been developed in public, but it will be nice to have those updates rolled out to our developers. Object serialization being incorporated as a language feature has our iOS team excited—this will lead to more consistent code across all our projects.
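
That serialization feature is Codable, developed in the open for Swift 4. A minimal sketch of what it replaces hand-written parsing with:

```swift
import Foundation

// Codable conformance is synthesized by the compiler: no manual
// NSCoding boilerplate or JSON key-by-key parsing.
struct Session: Codable {
    let title: String
    let speaker: String
}

let json = "{\"title\": \"What's New in Swift\", \"speaker\": \"Apple\"}".data(using: .utf8)!
let session = try! JSONDecoder().decode(Session.self, from: json)
print(session.title)   // "What's New in Swift"
```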

And, of course, we're about to get some new goodies in Swift. Updates to the compiler will not only help with compile times but, with any luck, we'll also get more useful error messaging, which will increase development speeds and quality of life. Our Rocketeers would also love to see another one of Apple's main apps become more open to extension. Last year we saw a little opening into Apple Maps, and it would be great if they continued expanding that. Something else we hope for, but don't really expect, is easier keychain support. Some developers avoid storing data in the keychain because it requires some C-level API usage. Apple could revamp that interface for easier accessibility via Swift. This would serve all of Apple's users by ensuring that more apps protect user data.
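
For the curious, this is roughly what the C-level keychain dance looks like today; a minimal sketch of storing a token, with the service details simplified and the account name as a placeholder:

```swift
import Security
import Foundation

// The C-style keychain API developers hope Apple will wrap in a
// friendlier Swift interface.
func saveToken(_ token: String, account: String) -> Bool {
    let query: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrAccount as String: account,
        kSecValueData as String: Data(token.utf8)
    ]
    SecItemDelete(query as CFDictionary)          // replace any existing item
    return SecItemAdd(query as CFDictionary, nil) == errSecSuccess
}
```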

One area that we might see some improvements is in the persistent caching of objects in a local store. Core Data, although powerful, is cumbersome to use in Swift, and writing your cache in a keyed archiver is prone to errors. It'd be fantastic to have a solid solution that's capable of adapting both NSObject subclasses and Swift structs into a lightning-fast storage file format.

This is exciting stuff! Is your brand ready for what may come from WWDC? Keep an eye on our blog for the latest from WWDC, or get in touch and share your vision for engaging your customers via mobile or voice.

June 23, 2016

5 Key Takeaways for Brands from WWDC 2016

There was no shortage of announcements made this year at Apple’s annual Worldwide Developers Conference (WWDC). We’re going to take a closer look at a few updates coming this fall to Apple’s suite of devices. For a detailed recap of the announcements made at WWDC, read our WWDC Keynote Recap.

Hidden amidst Apple’s announcements of their updated apps is the emergence of a new theme in iOS: extensions. Maps, Messages, Siri, and even notifications received robust updates that allow developers to engage with users in more ways than ever. What Apple seems to be heading towards: extensions are the new apps.

What are extensions? Think of them as add-ons for Apple’s first-party apps, such as Maps and iMessage. These add-ons will range from shortcuts to extended functionality. For example, with Maps, you will be able to look up a location, see hotels in the area, and book a room without having to ever leave the app.


Extensions add new touchpoints in iOS

Maps extensions are critical for restaurants, transportation services, hotels, retail, and more, since users will be able to book dinner reservations and request a ride to the location – all without leaving Maps. While Message extensions are not practical for every brand, they should be considered, as they will be a great benefit for those who can find a connection with their product or service. There is even an iMessage App Store, which means iMessage extensions have a dramatically increased chance of being discovered. Siri extensions currently have limited uses, but they are certainly an interesting start to a new line of features. Also, with previews now available for notifications, it’s important to consider what those interactions will look like for your app while keeping the user’s privacy in mind.
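
For teams sizing up the iMessage opportunity, here is a minimal sketch of the entry point for a Messages extension; the UI you present inside it is entirely up to you:

```swift
import Messages

// A minimal sketch of an iMessage app's entry point using the new
// Messages framework (iOS 10).
class MessagesViewController: MSMessagesAppViewController {

    override func willBecomeActive(with conversation: MSConversation) {
        super.willBecomeActive(with: conversation)
        // `conversation` exposes the selected message and lets you insert
        // new MSMessage bubbles into the thread from here.
    }
}
```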


Better UX on watchOS

The latest Apple Watch update will allow it to learn which apps are used most and prioritize updates and processing power accordingly. Improved launch times and background updates make interactions almost instantaneous – no more raising your wrist, waiting for the app to load, and waiting for the content to refresh. Widgets are also now available to developers. Brands must consider whether widgets are a valuable add-on for their app, as users will now be able to save and scroll through their favorite apps on the Apple Watch.


Single sign-in on tvOS

Authentication can be an infuriating process when using the same account across multiple apps. Now, for pay-TV channels on Apple TV, users will be able to sign in once and access all of the apps available to them through their cable provider. This may not be big news for individual content creators, but for any brand with a TV channel, it’s a vast improvement for accessing your content. Users who originally “didn’t feel like taking the time to log in again” will now simply have to download your app to enjoy the content.


Improved mobile communication with macOS

OS X, now macOS, mainly received quality-of-life improvements in the Sierra update. The Apple Watch now allows auto-login on macOS, and Apple Pay is available online with mobile authentication. This added continuity and communication between devices creates opportunities for new interactions. One industry that will specifically benefit from Apple Pay’s new online presence is e-commerce. Consider this: someone’s shopping online and ready to check out, only to realize they need to enter their credit card information and their wallet is in the other room. With Apple Pay, they log in to their account, a confirmation request appears on their iPhone, they approve the purchase, and that’s it. Not only does this provide added security, it cuts down on the time spent making online purchases, especially on websites a user may not have visited before.

Information is becoming easier to share between devices and is a vital factor in user experience between screens. If there is a mobile app equivalent of a macOS application, consider how to leverage this new functionality for added use between devices; if there is not a mobile companion app, you may want to consider creating one.


Siri is now available to developers

Integrations are limited at the moment, but this is a very interesting update. One subtle announcement that makes it even more exciting is the ability to include speech recognition in any iOS app. This furthers the trend of omnipresent interfaces, such as the Amazon Echo or Google Home, which let you interact with a device without physically touching it or even having it near you. Apple is heading toward a connected home and a further connected ecosystem across its suite of devices. By integrating Siri into apps and macOS, seamless transitions between devices and nearly conversational control of them are on their way. What form this will take varies by app and brand, but it’s an exciting addition for future opportunities nonetheless.
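
Here is a minimal sketch of that new speech recognition capability using the Speech framework announced alongside SiriKit; the bundled audio file is a hypothetical example:

```swift
import Speech

// A minimal sketch of transcribing a bundled audio file (iOS 10).
// Requires an NSSpeechRecognitionUsageDescription entry in Info.plist.
func transcribeMemo() {
    SFSpeechRecognizer.requestAuthorization { status in
        guard status == .authorized,
              let recognizer = SFSpeechRecognizer(),
              let url = Bundle.main.url(forResource: "memo", withExtension: "m4a") else { return }

        let request = SFSpeechURLRecognitionRequest(url: url)
        recognizer.recognitionTask(with: request) { result, _ in
            if let result = result, result.isFinal {
                print(result.bestTranscription.formattedString)
            }
        }
    }
}
```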


Contact us today to learn how your brand or app can benefit from the latest updates in the Apple ecosystem.
