If a tree falls in a forest and no one is around to hear it – does it make a sound? If a feature is added and nobody can find it – does it exist?
Each year, we set aside two full days of work to explore emerging technologies, challenge ourselves to learn something new, collaborate with our co-workers, and make something amazing. Bottle Rocket’s annual two-day hackathon, Rocket Science, is a chance for all Rocketeers to explore new technologies or interests while creating something fun for the office or potentially life-changing for its users.
With nearly 30 projects this year, this recap would be a little hefty if we were to cover them all. So, here are five that we can't stop talking about.
Project Title: CF Alert
Tech Explored: Progressive Web Apps (PWA)
Cystic Fibrosis is a devastating disease afflicting more than 30,000 Americans, with more than 75 percent of those diagnosed before turning two years old. As if that weren’t enough, children with Cystic Fibrosis cannot be within approximately 20 feet of each other due to the risk of transferring bacteria that can lead to serious infection in other patients. With that in mind, team CF Alert built a proof-of-concept PWA that would notify parents when another parent of a Cystic Fibrosis child was within 20 feet of them. This PWA concept was simple, elegant, and easy to use, and could potentially be life-saving for families coping with this serious condition.
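At the heart of a proximity alert like this is a simple great-circle distance check. Here's a minimal sketch; the team's actual PWA stack and alert logic weren't published, so the function names, coordinates, and threshold below are purely illustrative:

```python
import math

EARTH_RADIUS_FT = 20_902_231  # mean Earth radius, converted to feet
ALERT_RADIUS_FT = 20          # recommended separation for CF patients

def distance_ft(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two coordinates, in feet."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_FT * math.asin(math.sqrt(a))

def should_alert(my_pos, other_pos, radius_ft=ALERT_RADIUS_FT):
    """True when another tracked user is inside the alert radius."""
    return distance_ft(*my_pos, *other_pos) <= radius_ft
```

In a real PWA, each client would report its position (with consent) to a backend, which runs a check like this and pushes a notification via the Push API when two users come within range.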
Project Title: How Metal Am I?
Tech Explored: Create ML and CoreML
Leveraging Machine Learning for image and audio frequency analysis, this team looked to answer just how METAL something was. Simply snap a photo or let the “Metal Detector” app listen to the sounds around you, and it will provide a Metal Quotient for how hardcore it is. While showcasing the app in the photo above, Russell took a selfie that scored 34% Metal, probably thanks largely to the Led Zeppelin shirt.
Project Title: Empathy Lab
Tech Explored: Analog
Several accessibility experts at Bottle Rocket worked together to create a truly interactive experience to help others empathize with those who are unable to use traditional inputs for the technology we interact with on a daily basis. For instance, how do you tap on a touch screen when you are unable to use your hands? This team set out to convey those struggles. Seen at the top of this page, the blue bar in the image is actually a form of crosshair that moves across the screen and stops when a Switch is triggered. From there, another horizontal line appears, and the process is repeated. The user must time their taps to intersect the blue lines over the icon or button they want to click. It’s difficult to imagine what it’s like to have to use these devices in everyday life, but this experience definitely shed some light on the importance of designing with these needs in mind.
Project Title: Device Manager Voice App
Tech Explored: Google Assistant and Firebase
As Digital Voice Assistant adoption continues to rise, we continue to look for applications of the devices that go beyond simple “questions and answers.” Just as the name says, this team was able to use a Google Assistant App to manage our Quality Assurance device cabinet (which by the way currently includes over 400 devices). Instead of picking out a device, walking up to the checkout system, entering your name, scrolling to find the device (you get the picture), Rocketeers can now walk up and have a conversation with Google to expedite the checkout process.
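A checkout conversation like this is typically backed by a fulfillment webhook that receives the parsed intent and replies with spoken text. The sketch below loosely follows Dialogflow's v2 webhook request/response shape; the inventory, intent wiring, and device names are hypothetical, not the team's actual implementation:

```python
# Hypothetical inventory; the real cabinet tracks 400+ devices.
INVENTORY = {
    "pixel 2": {"available": True},
    "galaxy s9": {"available": False},
}

def handle_checkout(request: dict) -> dict:
    """Handle a Dialogflow-style webhook request for a device-checkout intent.

    Reads the device name from queryResult.parameters and returns the
    spoken response as fulfillmentText.
    """
    params = request.get("queryResult", {}).get("parameters", {})
    device = params.get("device", "").lower()
    entry = INVENTORY.get(device)
    if entry is None:
        text = f"I couldn't find {device or 'that device'} in the cabinet."
    elif not entry["available"]:
        text = f"Sorry, the {device} is already checked out."
    else:
        entry["available"] = False
        text = f"Done! The {device} is checked out to you."
    return {"fulfillmentText": text}
```

The win over the old kiosk flow is that name lookup, device search, and confirmation all collapse into a single utterance handled server-side.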
Project Title: BLAST'EM
Tech Explored: Image recognition, Arduino, Machine Learning
Tired of Imperials marching across your lawn? BLAST ‘EM has a solution to rid you of those pesky Stormtroopers. By combining Machine Learning and Arduino supplies, this team was able to create an automated Nerf gun that could recognize a Stormtrooper and fire after finding the target. The best part? They saw about a 25% success rate when using a paper mask, but nearly 100% success rate once Founder and CEO, Calvin Carter, ran over and grabbed his homemade Stormtrooper helmet.
As this year’s 7th annual Rocket Science comes to a close, we would be remiss if we didn’t mention just a few other notable projects that our Rocketeers conceived and created over the course of 48 hours.
Our “favorite fail” of the year goes to a team that attempted to teach a computer to play Mario Kart on an iPad, only to find out the emulator and Machine Learning model required different versions of iOS.
Want to learn more about the technologies used in this year’s Rocket Science and how they can benefit your business? Contact us today.
Setting the theme in the first minute of the presentation, Google kicked off its annual developer conference with the phrase “make good things together.” This was present in nearly every segment of the presentation, whether it was about Google facing challenges to better the world or giving everyone the tools to do it themselves.
Here are four key things that caught our attention and that we believe will have the greatest impact on business in the next year: Google Assistant, App Actions, App Slices, and ML Kit (click any topic to jump to it in the blog). There were, of course, more than those four topics that piqued our interest, but those are the ones that will impact businesses the most in the coming year. Keep in mind that several of these are sneak peeks and some will have many more features and capabilities in the coming months – so, be sure to check back in for more information as it is released.
Making its debut two years ago, Google Assistant is much more than it was when it started. At I/O today, three new features were announced for Assistant that could truly give it the edge over Alexa and Siri (and Cortana I guess). Those features are Continued Conversation, Multiple Actions, and improved interactions with Google Assistant Apps (the really big one).
Continued Conversation allows Assistant to keep providing answers without each question needing an “Okay Google” prompt. Once the conversation at hand is complete, a simple “thank you” ends the interaction and kills the mic. This also requires Assistant to distinguish conversational back-and-forth from an actual request.
Multiple Actions sounds simple but is extremely complex. Simply put, this allows the user to say things such as “what time is it and how long will it take me to get to work?” and get answers to both questions without having to ask them individually to Assistant.
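To see why even the "simple" version is non-trivial, consider a deliberately naive splitter. This is a toy sketch only, nowhere near Google's actual language understanding, which has to cope with shared subjects, "and" inside a single request ("mac and cheese"), and far messier phrasing:

```python
def split_actions(utterance: str) -> list[str]:
    """Naively split a compound request into individual queries.

    Splits on the word "and" alone, which is exactly the kind of shortcut
    a production assistant cannot afford to take.
    """
    parts = [p.strip(" ?") for p in utterance.split(" and ")]
    return [p + "?" for p in parts if p]
```

The naive version already fails on "play rock and roll" — which is precisely the ambiguity that makes Multiple Actions extremely complex under the hood.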
Google Assistant Apps have some new capabilities as well. To get ready for Smart Displays, Google gave Assistant a visual overhaul. Now, information is displayed full screen and can include additional visuals such as videos and images. eCommerce applications can benefit greatly from the visual overhaul as the transition into Assistant Apps is much easier and more natural for the user. Previously, a user had to request to be connected to a Google Assistant App, but now a simple request such as “order my usual from Starbucks” will take the user directly into the Starbucks Assistant App. Seen above, the user can quickly and easily select additional menu items to include in their order via the new visual interface mentioned before. From first request to completed order, this interaction will likely involve fewer steps for the user than going directly into the Starbucks app (given it’s not on the user’s home screen).
Suggestions already appear below the Google search bar while typing. Soon, suggested actions will begin to appear as well. This might not sound like much, but imagine someone searching for a particular product, like laundry detergent: the Walmart app could surface an App Action to “add to my grocery order” for pickup later.
As shown above, Google provided an example of searching for Infinity War. When the user searched for it, they were prompted with options to buy tickets or watch the trailer. This is a great example of a contextual interface, but this doesn’t just happen like magic. Apps need to be optimized to allow for this type of interaction.
In this example, Google has placed App Actions in the launch menu. The suggestions are based on the user’s everyday behavior. In this instance, it is suggesting the presenter call Fiona, as he usually does at this time of day, or continue listening to the song he last listened to since his headphones are connected.
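A toy version of that kind of suggestion heuristic might simply rank actions by how often the user performs them at the current hour. The history format below is purely illustrative; the on-device signals Android actually uses are richer (connected devices, location, app state) and not public:

```python
from collections import Counter

def suggest_action(history, hour):
    """Suggest the action performed most often at this hour of the day.

    `history` is a list of (hour, action) pairs, a stand-in for real
    usage logs. Returns None when there is no history for that hour.
    """
    counts = Counter(action for h, action in history if h == hour)
    if not counts:
        return None
    return counts.most_common(1)[0][0]
```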
Similar to App Actions, App Slices also appear in search. But there is a difference. Instead of simply suggesting an action, App Slices use the functionality of an app to display information in search. It can present a clip of a video, allow the user to check in to a hotel, or even show photos from the user’s last vacation.
In the example shown here, simply searching “Lyft” brings up the suggested routes in the Lyft app and displays the cost of the trip as well. We’ll learn more about which App Slices are available soon, so be sure to check back to learn more about the potential benefits of this innovation.
Part of Firebase, ML Kit (Machine Learning Kit) now offers a range of machine learning APIs for businesses to leverage. Instead of building custom ML algorithms for anything and everything, optimizing them for mobile usage, and then training them with hundreds (preferably thousands) of samples, developers can now start from Google-provided “templates” for some common business needs.
Leveraging TensorFlowLite and available on both Android and iOS, ML Kit will make it easier to integrate image labeling, text recognition, face detection, barcode scanning and more. It can even be used to build custom chatbots.
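Whatever the detector, client code usually post-processes the labeler's output by confidence before showing anything to the user. A small sketch, assuming (label, confidence) pairs like those an image labeler returns; the label names, threshold, and function signature here are illustrative, not ML Kit's real API:

```python
def top_labels(labels, min_confidence=0.7, k=3):
    """Keep the k most confident labels above a confidence threshold.

    `labels` is a list of (text, confidence) pairs with confidence in [0, 1].
    """
    kept = [label for label in labels if label[1] >= min_confidence]
    return sorted(kept, key=lambda label: label[1], reverse=True)[:k]
```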
There were plenty of other announcements in the keynote and even more on their way as the week goes on. For instance, right after the keynote, we found out that Starbucks had nearly as many orders come through its PWA as through its mobile app. We learned that Google Assistant can now make phone calls to schedule appointments – without the customer service representative realizing it’s a computer. Google announced a new Shush mode to completely eliminate notifications when a phone is upside down on a table, and a lot more.
Even among the four topics covered in this recap, there is more information to come as the week goes on. We’ll dive into each as we get more information back from our Rocketeers in California, so be sure to check back in a couple of days.
For the final day of Google I/O, we wanted to take a moment to share a few favorite updates and pieces of technology that caught our developers’ attention.
Pokémon Go was an interesting case in human behavior. It spread like wildfire and quickly resulted in countless news stories of individuals getting hit by cars, falling off motorcycles, climbing into active construction zones and more (example 1, 2, 3, 4, 5, 6… you get the idea). Now, Google has a solution for that too. Android Engineer, Chris Koeberle, stumbled across this limited-release project Google has been working on. To avoid another craze like Pokémon Go, Google is working to create “safe zones” where events can occur in GPS-based games, like public parks and malls. They’re also working to cut down development time by making it easier to skin Google Maps using Unity. Again, this is not available to everyone, but we don’t expect it to stay that way forever.
With so many platforms, app integrations, and more appearing these days, it’s hard to know which ones are truly reliable. Whether they lose support in months or are rife with bugs, many are extremely skeptical about these services. However, Senior Technical Architect, Jonathan Campos, wanted to be sure this isn’t applied to every service out there. “One of the worst rumors plaguing companies is that Firebase isn’t a ‘real’ platform. Rumblings of scalability and security that ‘true’ developers desire isn’t available with Firebase – but none of this has any credible backing. It may have been true when it released, but it shouldn’t be grouped with these bad actors any longer. Firebase is different. Firebase is secure. Firebase is scalable. It can support projects on a global scale and is up-to-date with the latest security standards. It is really impressive how much you can do if you just make the leap.”
Powered by IoT Core, Android Things has more support than ever. To put it simply, Android Things is a suite of components and devices that play nice with the Android ecosystem. Instead of fighting to have your little device communicate with your phone, you can spend the bulk of your time making the magic happen.
Director of Android Engineering, Luke Wallace, snapped this photo for us. As it says on the plaque, “This is a gimbal stabilizer built with a raspberry pi and taught with Google’s TensorFlow Lite Machine Learning technology.” If you’re unfamiliar, gimbal stabilizers rotate and turn a camera to keep it focused on a particular object. In this case, the ML model learned which direction it would need to twist the camera to keep the subject in view. Another awesome Android Things project on display was a sentinel that can monitor your house while you’re away.
All good things must come to an end, and so must Google I/O 2018. Before we close out the week, we’d like to take a moment to thank our Rocketeers for keeping us informed on the latest and greatest from Google.
Be sure to check back over the next few weeks as we take a deep dive into the new technologies offered by Google. We’ll be looking at how Google’s efforts are improving the human experience, changing how users interact with technology, and how businesses can harness these innovations to improve their own projects and offerings.
If you cannot wait until then, contact us today to learn more about the changes coming to the Android ecosystem, Google’s new Machine Learning and Cloud technologies, and how all of these can improve the way businesses serve their customers.
While a lot of these might not be as flashy as an AI that makes phone calls, there’s no shortage of new and/or updated tools for developers to leverage in the coming year. Here’s what caught the attention of our Rocketeers during the second day of Google’s annual developer conference.
Developers can’t just write code and expect it to run perfectly; it needs to be tested – the more often, the better. Back in the day (like last week), developers had to decide whether to run their tests on the machine they’re using for development or on a device. Now, they can simply choose which they would like to test against, and Jetpack will take care of the rest. The Jetpack Test will even simulate the conditions an Android device “in the wild” would face to make the test more accurate.
For many companies, Angular is the basis for the majority of their Web Applications. Since Angular reigns as the king of Web Applications, new features and improvements in the framework translate directly into improvements in those applications. Here are a few features that Senior Technical Architect, Jonathan Campos, found to be the most exciting:
Schematics – Customize the generated code for an application; improves development speed.
Angular Universal – Server-side rendering of Angular applications at first request by a user; improves first-paint speed and user experience.
Angular Elements – Allows rendering of Angular components without needing to include the entire Angular framework on a webpage.
Ivy Renderer – This remarkable change in rendering can both reduce bundle sizes and improve the initial load time of an application by removing unused code and only compiling the necessary code that changed between releases.
The Internet of Things (IoT) continues to grow, but it’s not getting any easier to manage – until now. Google’s new IoT management tool, Cloud IoT Core, will make it much easier to manage, connect, and grow IoT ecosystems that seamlessly connect to Google’s Cloud Platform (GCP). It’s not just the development that’s streamlined; analytics are also more manageable than ever.
An underrated feature hidden in Chrome, Lighthouse helps web developers pinpoint areas for optimization to increase the performance of websites. As of I/O 2018, Google is expanding on its feature set. One feature that will help companies the most in monitoring their sites is the added Lighthouse API. This way, businesses can integrate diagnostics right into their Continuous Integration and Continuous Delivery pipelines.
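A CI gate on a Lighthouse report can be as small as a single lookup. This sketch assumes the JSON shape recent Lighthouse releases emit, where category scores live under `categories.<id>.score` on a 0–1 scale; verify the exact fields against the Lighthouse docs for the version in your pipeline:

```python
def performance_gate(report: dict, threshold: float = 0.9) -> bool:
    """Return True when the Lighthouse performance score meets the threshold.

    A CI job would fail the build when this returns False.
    """
    score = report["categories"]["performance"]["score"]
    return score >= threshold

# Example report fragment, trimmed down to the one field we read:
sample_report = {"categories": {"performance": {"score": 0.93}}}
```

Wired into a Continuous Integration step, a regression that drops the score below the threshold blocks the merge instead of shipping to users.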
Google Photos is great on its own, but it doesn’t play well with others. As users take photos, Google Photos is great for backing up and indexing those images. However, finding an image in another app, like a photo editor, can result in minutes of searching through device folders. Yesterday, Google introduced a developer API for Google Photos. This will allow user-permitted apps to search through images directly or by using categories like “documents” or “selfies” as a filter. Director of Android Engineering, Luke Wallace, had this to say about the new API: “Imagine picking a profile photo by just seeing your last 10 selfies in Google Photos, it would be so much quicker than it is today! The API allows for basic filtering of photos, adding photos to Google Photos, creating albums, and even enhancing the albums with more information around the photos like descriptions and map views.”
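Requests against the new API are plain JSON. Here's a sketch of building a `mediaItems:search` request body with a content-category filter; the field and category names follow our reading of the announced API (categories such as "SELFIES" and "DOCUMENTS"), so treat them as assumptions and verify against the current documentation before use:

```python
def build_search_body(categories, page_size=10):
    """Build a request body for the Photos Library API's mediaItems:search.

    `categories` is a list of content-category names to filter on.
    The resulting dict would be POSTed with the user's OAuth credentials.
    """
    return {
        "pageSize": page_size,
        "filters": {
            "contentFilter": {"includedContentCategories": list(categories)}
        },
    }
```

A profile-photo picker like the one Luke describes would send this body filtered to selfies and render the first page of returned media items.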
Unless an app has a singular purpose, it’s going to need a menu – among other things. Instead of starting from scratch each time, Google has made it easier than ever to edit UI components for Material Design in the support library. Now, instead of having to reinvent the wheel every time you want a custom interface, you can start with something that resembles a wheel and modify where you see fit. The best part? It’s not just for apps! The support library provides UI components that can be used for Android, iOS, web, Flutter and React.
A lot happens behind the scenes both while an app is actively in use and while it’s running in the background. Ever put your phone in your pocket and found it unusually warm? It’s probably due to an unoptimized app ravaging your CPU. That’s why Google made WorkManager. With this nifty tool, developers have more visibility into solutions for background work – which will ultimately help them make more battery-friendly apps.
While this may seem like a lot, this is just the tip of the iceberg. It seems no service, tool, or platform was left untouched this year. What’s even more astonishing is that Google has more releases on the way. Some of the updates we’re learning about are just now being released to the public and some aren’t even out yet. So, be sure to check in every now and then as we explore even more of these new features and services from Google.
Until then, if you’d like to hear more about the updates coming to Android and how Google’s services can improve both your iOS and Android applications, contact us today.