Artificial intelligence has become the backbone of everything Google does
Hours before Google CEO Sundar Pichai stepped onto the stage at his company's annual I/O developer conference, the company gave a big clue about the direction of both his keynote address and the business as a whole. The entirety of the Google Research division was renamed Google AI, a move the company said reflects how it is "implementing machine learning techniques in nearly everything we do".
Indeed, Google uses artificial intelligence and machine learning in everything from Gmail and battery management in Android, to gathering news headlines, creating robotic voices which sound human, adding color to century-old photos, and teaching autonomous cars to drive in the snow.
Google's use of artificial intelligence is spreading wide and far, so here is a rundown of every AI development mentioned at this year's I/O keynote.
AI means Assistant can make phone calls on your behalf
The star of the show was Google Assistant and its new Duplex feature. Although the feature is not yet available to the public, Google showed how the Assistant's ability to hold natural conversations means it can book a hair appointment at a salon and make a restaurant reservation, speaking to real humans, without any help.
Using recordings of human conversations, Google has taught its AI how to speak with a person, one who, if the demonstrations are to be believed, may not even know they are talking to a computer. The Assistant uses discourse markers and filler words like 'um' to sound more natural, and although it can only operate in a few set scenarios for now, Google says its intelligence will help us save time and get on with the more important parts of our day. The customer service sector must surely be interested in this technology, too.
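To give a loose sense of what "using filler words to sound more natural" means, here is a deliberately simple Python sketch. It is not Duplex, which Google says is a neural model trained on real phone conversations; the function below just shows the basic idea of sprinkling discourse markers into scripted speech.

```python
import random

# Hypothetical filler words; Duplex's real repertoire is learned, not hard-coded.
FILLERS = ["um", "mmhmm", "uh"]

def add_discourse_markers(sentence: str, probability: float = 0.3) -> str:
    """Occasionally prepend a filler word so scripted speech sounds less robotic."""
    if random.random() < probability:
        return f"{random.choice(FILLERS)}, {sentence}"
    return sentence

# e.g. "um, I'd like to book a haircut for Tuesday."
line = add_discourse_markers("I'd like to book a haircut for Tuesday.")
```

The real system decides where a filler fits based on the flow of the conversation; the toy version only flips a coin, which is exactly why the demos were so striking.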
Aware that such a powerful tool could pose ethical concerns over people not knowing who (or what) they are speaking to, Google says: "It's important to us that users and businesses have a good experience with this service, and transparency is a key part of that. We want to be clear about the intent of the call so businesses understand the context. We'll be experimenting with the right approach over the coming months."
Google's AI efforts have also resulted in six new voices coming to Assistant, and a seventh in the form of singer John Legend will arrive later in 2018.
AI adds color to old black-and-white photographs
Another applause-worthy demonstration of AI was when Google showed off how an update to its Photos smartphone app can add color to old black-and-white images.
AI is already used by Photos to automatically create animations, stylized photos, movies and other content, but now the app will suggest AI-powered fixes for underexposed photos, offer to share images with the friends Google recognizes in them, and add color to old pictures.
Called Colorize, the new tool isn't ready just yet, but a demonstration at I/O showed how it uses AI to add some color to an old black-and-white photograph. Grass is turned green and, while not magically turned into a brand new photo, the image is given extra life through the addition of basic color.
Gmail uses AI to write your emails for you
An AI feature ready today is Smart Compose, which will roll out to Gmail users over the coming weeks. Smart Compose builds on Gmail's current ability to suggest quick one- or two-word replies based on phrases you commonly use, but is almost capable of writing entire emails.
Smart Compose uses AI to understand the context of the email you are replying to, or the one you are writing, based on its subject line and who you are sending it to. The AI then acts like a highly sophisticated predictive text system, completing sentences after you type just a couple of words. As well as taking informed guesses about what you want to say, data like your street address is automatically inserted at the right time, such as when you type the first few characters of "my address is".
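To illustrate the predictive-text idea in the crudest possible terms, here is a toy Python sketch. Google's real system is a neural language model conditioned on the subject line and recipient; this stand-in just matches what you have typed against a hypothetical list of saved phrases and offers the rest.

```python
# Hypothetical saved phrases; the example street address is invented for illustration.
COMMON_PHRASES = [
    "my address is 123 Example Street",
    "thanks for your help",
    "looking forward to hearing from you",
]

def suggest_completion(typed):
    """Return the remainder of the first known phrase that starts with the typed text."""
    typed = typed.lower().strip()
    for phrase in COMMON_PHRASES:
        if phrase.startswith(typed) and phrase != typed:
            return phrase[len(typed):]
    return None

suggest_completion("my address is")  # offers " 123 Example Street"
```

The gap between this and Smart Compose is, of course, enormous: the real feature composes novel sentences rather than replaying stored ones, which is what makes it feel like the email is writing itself.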
Google is already known for using AI in its Android operating system, particularly in the camera of the Pixel 2 smartphone, which manages to take better photos than its rivals despite using a single lens where they use two. Google's AI is far better at producing 'bokeh' background blur in portraits than the multi-lens systems of the iPhone X and Samsung Galaxy S9.
Android P uses AI to save battery
For Android P, which is due later this summer but can be downloaded as a beta now, Google uses AI for a new feature called Adaptive Battery. Using technology developed by DeepMind, an AI company owned by Google parent Alphabet, Adaptive Battery only draws power to run apps it knows you are using or are likely to use, while reducing the power demands of apps running (but not recently used) in the background.
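As a rough illustration of the concept (and nothing more; DeepMind's actual model is a learned, on-device predictor), here is a toy Python sketch in which a simple usage counter stands in for the prediction, and only the apps you launch most often keep full background access.

```python
from collections import Counter

class AdaptiveBatterySketch:
    """Toy stand-in for Adaptive Battery: defer background work for unlikely apps."""

    def __init__(self, allow_top_n=2):
        self.launches = Counter()
        self.allow_top_n = allow_top_n

    def record_launch(self, app):
        """Note that the user opened an app; this is our only 'training' signal."""
        self.launches[app] += 1

    def background_allowed(self, app):
        """Only the most frequently used apps keep unrestricted background power."""
        top = [name for name, _ in self.launches.most_common(self.allow_top_n)]
        return app in top
```

The real feature predicts which apps you are *about* to use, not merely which you used most, but the trade-off is the same: rarely used apps pay the price in background restrictions so the battery lasts longer.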
AI is also used by Android P to learn how you manually adjust the screen brightness, then take over this task for you in a bid to save battery life and make viewing the screen more comfortable. Finally, Android P shows a carousel of apps when you swipe up from the home screen; these apps are chosen because the phone's AI thinks they are what you're looking for, based on use habits.
Updated Google Lens has AI for recognizing dogs, translating menus
Coming in a few weeks, an update to the Google Lens app uses AI to recognize objects, pictures, buildings, animals, paintings, text and much more. Lens is already in some LG phones, including the recently launched LG G7 ThinQ.
The app uses your smartphone camera and the Assistant to suggest places to buy a piece of clothing you show it, or to provide information on the breed of dog you just photographed. Lens can also translate writing, such as menus while on vacation, or lift text from a physical book so you can search, copy, paste, share and translate it on your phone. You can also use Lens to capture text from a restaurant menu and then find images, recipes and YouTube videos of each dish.
Google News uses AI to pick stories relevant to you
A redesigned Google News app uses AI to pick articles it thinks you might be interested in reading, and this AI improves the more you use the app. The company says: "The reimagined Google News uses a new set of AI techniques to take a constant flow of information as it hits the web, analyze it in real time and organize it into storylines...Google News understands the people, places and things involved in a story as it evolves, and connects how they relate to one another."
Waymo drives through snow with help from AI
Finally, Google gave an update on its autonomous car division, Waymo. Using artificial intelligence and deep learning, the company showed how its cars are being taught to drive in snow.
Wintery conditions don't just make roads slippery and white lines tricky to identify; falling snow also impairs the car's ability to see ahead. Machine learning is used to filter out raindrops and snowflakes, which cause a great deal of sensor noise (the pink in the image above), giving the car a clearer picture of its surroundings.
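One intuition behind this kind of filtering (a hedged sketch only; Waymo's system is a learned model over raw sensor data, not the rule below) is that a solid obstacle shows up in roughly the same place scan after scan, while a falling snowflake does not. The toy Python function below keeps only the sensor returns that persist between two consecutive scans.

```python
def filter_transient_points(prev_scan, curr_scan, tolerance=0.5):
    """Keep points in curr_scan that had a nearby point in prev_scan.

    Scans are lists of (x, y) positions; 'tolerance' is the hypothetical
    distance (in meters, say) within which two returns count as the same object.
    """
    kept = []
    for (x, y) in curr_scan:
        # A snowflake falls between scans, so it rarely reappears near itself.
        if any(abs(x - px) <= tolerance and abs(y - py) <= tolerance
               for (px, py) in prev_scan):
            kept.append((x, y))
    return kept
```

A real perception stack works in three dimensions, across many frames, and learns which noise patterns look like precipitation rather than applying a fixed distance threshold, but the persist-or-perish intuition is the same.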
A blog post published by Waymo this week explains how AI is also used to navigate unmapped routes, such as through construction zones, to yield to emergency vehicles, and to give space when passing a car that is parallel parking.
It was reiterated at I/O this week that Waymo is currently running completely driverless test vehicles on public roads (with no one sitting behind the wheel), and that a driverless taxi service will be available to the public before the end of 2018.