Sundar Pichai Tweet:
Thanks to everyone for joining us in person and online for #io17! See you again next year 🙂
Google I/O officially began on Wednesday, May 17th and concluded on Friday, May 19th. As conferences go, the amount of material covered is enough to leave you mentally drained. By “material covered” I mean the number of sessions one can actually attend, the number of YouTube videos one can watch, and the number of conversations one can engage in. Adrenaline will only take you so far, and in the end fatigue is always the victor.
The following are the items that made the biggest impression on me as I feel they will have a substantial impact on the Google ecosystem.
Artificial Intelligence (AI)
Google has not hidden its belief that artificial intelligence is a game changer for the industry, and it underscored this belief with the introduction of the new and improved “Cloud TPU” (Tensor Processing Unit). This is a custom-designed processor created to handle the serious number crunching needed to enhance the AI experience. The stated reason for going it alone is that Google wanted silicon for its machine learning algorithms optimized for performance per watt. To put this into context, it is important to note the AI experience is applied to millions of users at any given moment as they use services like Search, Assistant, and Maps.
How powerful is the Cloud TPU? Google states each Cloud TPU, which consists of four chips, delivers 180 teraflops (a teraflop is a trillion floating point operations per second). One of Google’s biggest competitors in this market is NVIDIA, whose new Tesla V100 is rated at 120 teraflops. Both of these processors can be clustered for additional power. For example, Google announced the TensorFlow Research Cloud, which gives researchers access to a cloud of 1,000 TPUs rated at a total of 180 petaflops (a petaflop is a quadrillion floating point operations per second) of raw compute power.
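The Research Cloud figure follows directly from the per-TPU rating, assuming simple linear scaling across the cluster. A quick sanity check in Kotlin (the function name is mine, for illustration only):

```kotlin
// Rated throughput of one Cloud TPU (four chips), per Google: 180 teraflops.
const val TPU_TERAFLOPS = 180.0

// Aggregate compute of an N-TPU cluster in petaflops, assuming
// simple linear scaling (1 petaflop = 1,000 teraflops).
fun clusterPetaflops(tpuCount: Int): Double =
    TPU_TERAFLOPS * tpuCount / 1_000

// clusterPetaflops(1_000) == 180.0, matching the announced 180-petaflop figure.
```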
Sundar Pichai cited the results obtained by the AlphaGo project. To refresh your memory, last year the AlphaGo AI software beat the world-class Go player Lee Se-dol in a 4-1 series.
The version playing Ke Jie (the current world champion) is so much more efficient that it uses one tenth the computation that AlphaGo Lee used, and it runs on a single machine in Google’s cloud, powered by one tensor processing unit (TPU). AlphaGo Lee would probe 50 moves deep and study 100,000 moves per second. While that sounds like a lot, by comparison, the tree search powering the Deep Blue chess system that defeated Garry Kasparov in the 1990s looked at 100 million moves per second. AlphaGo is, in effect, thinking far more selectively than Deep Blue.
The TPU introduction demonstrates Google has the ability to compete in the processor arena. Much like Apple, it would not be a shock to me if some future Google device runs a processor designed by Google.
If you need additional evidence to support the above comments, look no further than Google Lens. Lens is the reincarnation of Google Goggles, or Google Search re-invented. It uses machine learning to identify objects and, in conjunction with Google Assistant, display contextual information by examining images viewed through your phone’s camera or saved in your photos.
A few things Lens can do:
- Tell you the species of a flower
- Read the SSID and password sticker on the back of a router and then connect you to the network
- Display reviews and other information about restaurants or retail stores
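The router case boils down to a simple pipeline: OCR the sticker, extract the SSID and password, and hand them to the Wi-Fi stack. A minimal sketch of the parsing step, assuming the OCR text comes back as labeled “SSID:” / “Password:” lines (the names here are mine, not Google’s):

```kotlin
data class WifiCredentials(val ssid: String, val password: String)

// Pull SSID and password out of OCR'd sticker text of the form
// "SSID: HomeNet" / "Password: s3cret". Returns null if either is missing.
fun parseRouterSticker(ocrText: String): WifiCredentials? {
    val fields = ocrText.lines()
        .mapNotNull { line ->
            val parts = line.split(":", limit = 2)
            if (parts.size == 2) parts[0].trim().lowercase() to parts[1].trim() else null
        }
        .toMap()
    val ssid = fields["ssid"] ?: return null
    val password = fields["password"] ?: return null
    return WifiCredentials(ssid, password)
}
```

The hard part, of course, is the OCR itself; once the text is recovered, joining the network is routine.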
It is interesting to see the cross-pollination of apps from one ecosystem to another. Google announced that Assistant joins Maps in the Apple App Store; it will be even more interesting to see what the adoption rate becomes. Assistant can now use the keyboard as the default input method and leverages AI to enhance next-word suggestions. Exactly how this works depends on which keyboard you’re using: with Gboard, suggestions come from Assistant; with another keyboard like SwiftKey, you’ll see Assistant suggestions in a supplementary line above the keyboard’s own. If you mix voice and keyboard usage, Assistant will show you your last two voice queries.
One of the things the latest “WannaCry” outbreak demonstrates is the need to keep your OS current. The Android development team understands the importance of this and announced Project Treble to address this need.
One thing we’ve consistently heard from our device-maker partners is that updating existing devices to a new version of Android is incredibly time consuming and costly.
Many folks feel the reason Android updates are so slow in coming is a general reluctance of Google’s partners to do the work. There is some truth in this belief, as the technology industry is always focused on the next big thing; hence the cliché that your device is obsolete the moment you buy it.
But honestly, it’s a lot of work, as the following high-level steps illustrate.
- The Android team publishes the open-source code for the latest release to the world.
- Silicon manufacturers, the companies that make the chips that power Android devices, modify the new release for their specific hardware.
- Silicon manufacturers pass the modified new release to device makers — the companies that design and manufacture Android devices. Device makers modify the new release again as needed for their devices.
- Device makers work with carriers to test and certify the new release.
- Device makers and carriers make the new release available to users.
Adding to this burden are the cases where a manufacturer has augmented the UI or customized the release to meet the requirements of its locale. The goal of Treble is to reduce this burden through modularization. The good news is Project Treble will be coming to all new devices launched with Android O and beyond. The bad news is it will not be coming to current or older versions of Android.
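Treble’s core idea is a stable interface between the Android framework and the vendor-specific code beneath it, so the framework side can be updated without redoing the silicon and device-maker steps above. A conceptual sketch of that separation (the real vendor interface is defined in HIDL/C++; every name below is illustrative, not a real Android API):

```kotlin
// The stable contract the framework codes against. As long as this
// interface holds, the framework can be updated independently of vendors.
interface CameraHal {
    fun open(): String
}

// Vendor-supplied implementation, shipped and updated separately
// from the framework.
class AcmeCameraHal : CameraHal {
    override fun open() = "acme-camera-v1"
}

// Framework side: depends only on the interface, never on the vendor class,
// so a new framework release needs no changes here.
fun startCameraService(hal: CameraHal): String =
    "camera service using ${hal.open()}"
```

The design choice is the classic one: once the contract is frozen, the two sides can move on independent schedules, which is exactly the update bottleneck Treble targets.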
Android Go is Android tuned for economy devices to ensure the richest possible experience. The goals are as follows.
- Ensure Android runs smoothly on entry-level devices.
- Apps built for these devices are highly optimized.
- Play Store highlights apps tuned to the needs of users coming online for the first time.
This addresses the needs of emerging markets and of anyone who doesn’t want, or can’t afford, a high-end smartphone.
Very few announcements brought cheers from the audience, but Kotlin was one of them. What is Kotlin? Kotlin is a programming language from the folks at JetBrains. My quick look at Kotlin shows a powerful syntax that results in fewer lines of code. The language is tightly integrated with Java, which means developers can keep using the Java libraries they use now; Kotlin code can call Java and vice versa, and Android Studio can even convert existing Java code to Kotlin.
IDE support is crucial and beginning with Android Studio 3.0, support for Kotlin is bundled directly into the IDE.
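As a taste of the concision on offer, here is a small example of my own (not from the keynote): a data class plus Kotlin’s null-safety operators replace what would be dozens of lines of Java boilerplate.

```kotlin
// One line generates equals(), hashCode(), toString(), and copy() --
// boilerplate that would take dozens of lines in Java.
data class Session(val title: String, val room: String?)

// The Elvis operator (?:) supplies a fallback for the nullable room,
// with the compiler enforcing that the null case is handled.
fun describe(s: Session): String =
    "${s.title} in ${s.room ?: "an unannounced room"}"

// describe(Session("Office Hours", null)) == "Office Hours in an unannounced room"
```

Because Kotlin compiles to ordinary JVM bytecode, a Java class can call `describe` directly, which is what makes the migration story so painless.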
I predicted Google would announce their version of the Amazon Echo Show, and in many ways they did, although not in the way I expected.
In the past, Google Home was always listening, waiting to respond to “Ok Google”. In the future, Assistant will be working in those idle moments and will pulse its lights to let you know it has something to announce. When you notice the lights, simply say “Ok Google, what’s up?” Google says what it pushes will be limited to only the most important information and, if done correctly, can be extremely useful.
Free Phone Calls
If the recipient of the call is in your Google Contacts and has a valid telephone number, you’re good to go. This is similar to what is currently available in many “hands free” auto systems, with the exception that the auto system uses your cell phone to place the call. If desired, outgoing calls can be masked to appear as though they’re coming from your phone. And with the newly announced multi-account voice recognition support, if instructed to call “Mom”, Assistant knows the correct Mom by the voice that made the request.
What about incoming calls? Google is taking a wait-and-see approach to this feature. As Home does not have a camera, I don’t see video conferencing in the cards for the current version of this product.
Google Home is becoming a conduit to your other devices, as it can send content to your phone or TV. This means a request for directions may result in a Google Map being sent to your phone, and a request to see a video may result in the video playing on a connected TV. Time will tell whether the best user experience is achieved by embedding a screen into a device like the Echo Show or by connecting to devices you already own.
HTC announced the planned release of a standalone Daydream-compatible Vive VR headset. Not much was said about its features or specifications, but it is assumed the device will be powered by a Qualcomm Snapdragon 835 SoC and will support the new Daydream WorldSense motion tracking (see video). A similar headset is expected from Lenovo.
These new headsets solve a couple of problems.
- The requirement of using a Daydream-compatible phone to view Google VR content is too restrictive. At this point in time, most phones are not Daydream compatible.
- In some circumstances, like the classroom, standalone headsets make more sense.
- Not all Google VR software is Daydream compatible. You may recall that earlier this month Google acquired Owlchemy Labs, maker of the highly popular VR game Job Simulator. Job Simulator and other Google VR products like Tilt Brush and Google Earth VR don’t work on Google’s Daydream headset and have been released on the Oculus Rift and HTC Vive.
No word on pricing, but the new VR headsets built to support Microsoft Windows 10 are priced at $299.
There were no official Chromebook announcements on the main stage, but the chatter from Kan Liu, senior director of product management for Chrome OS, is that the splashy product launch we were supposed to have earlier this year is still coming (no products were mentioned). The much-anticipated Samsung Chromebook Pro is expected to be released on May 28th, complete with support for Android Nougat applications. It was not mentioned whether support for Nougat will be extended to the Plus model on or around the same time. There is nothing on the Samsung or Google Store yet, but a Chromebook Pro page is live on Amazon and should begin to appear on other sites soon.
Liu stated the Chrome OS Android release cycle will not be dependent upon other devices, which means updates could arrive on Chrome OS before they arrive on phones. The other feature mentioned is that Chrome OS will now determine what a selected Android app supports and provide the appropriate options.
Finally, Android for Chrome OS will remain in beta for a while longer which means there may be glitches and functionality that doesn’t work.
In today’s world everyone is a photographer, carrying a camera as part of their phone. Google claims 500 million monthly Google Photos users add 1.2 billion photos per day. It is guaranteed that at some point we will want to find, view, and share them.
Users of Google Photos will soon be able to automatically share photos, or filter which photos auto-share, by date or topic. A new Suggested Sharing feature will use facial recognition to prompt users to send photos to their friends (similar to Facebook’s Moments app). Available today, Google Photos uses machine-learning algorithms to classify the objects in photos to make them searchable, so users can easily find all their pictures of dogs, cats, or sunsets.
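The searchable-classification idea reduces to an inverted index from labels to photos: the hard machine-learning work produces the labels, and after that a search for “dog” is a single lookup. A toy sketch, assuming the classifier has already labeled each photo (all names are hypothetical, not Google’s API):

```kotlin
// Build an inverted index: label -> ids of the photos carrying that label.
fun buildLabelIndex(labelsByPhoto: Map<String, List<String>>): Map<String, List<String>> {
    val index = mutableMapOf<String, MutableList<String>>()
    for ((photoId, labels) in labelsByPhoto)
        for (label in labels)
            index.getOrPut(label) { mutableListOf() }.add(photoId)
    return index
}

// A search is then just a lookup; unknown labels return an empty list.
fun search(index: Map<String, List<String>>, label: String): List<String> =
    index[label] ?: emptyList()
```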
Reminiscent of a photo scrapbook, Google Photos will allow you to assemble content into Photo Books.
These are the cards Google has placed on the table, and it is certain that product introductions over the next six months will feel a bit familiar as a result.
For those who are interested, the videos are still available for viewing on YouTube and the Google I/O 2017 site.