Google is looking to “reimagine” Android with its Gemini AI, touting it as a “once-in-a-generation event to reimagine what phones can do.”
At Google I/O 2024, the search giant said it would integrate AI into Android in three ways: bringing AI search into Android, making Gemini the new AI assistant, and leveraging on-device AI.
Translated into everyday language, this means that more AI search tools, like Circle to Search, will be front and center on Android. The AI-powered tool, which can identify objects and text that users circle in photos and on screen, will be expanded to tackle more complex problems, such as graphs and formulas, later this year.
Gemini AI, which can be found right now on the Google Pixel 8a, will become the foundation of AI for Android, bringing multimodal AI (the ability to process, analyze, and learn from information and inputs from multiple sources and sensors) to the mobile operating system. All of which makes this one of the biggest AI announcements of Google I/O 2024.
In practice, this will mean that Gemini will work on all kinds of apps and tools to provide suggestions, answers, and contextual prompts. An example of this was the use of AI in the Android Messages app to produce AI-created images to share in chats. Another is the ability to answer questions about a YouTube video a person is watching and extract data from sources such as PDF files to answer very specific queries, such as a particular rule in a sport.
What's more, Gemini can learn from all this and use that information to predict what a person might want. For example, knowing that the user has shown interest in tennis and chats about the sport, it could offer them options (pun intended) to find nearby tennis courts.
The third aspect of AI on Android is to ensure that much of the intelligent processing can be done on the phone itself, rather than requiring an internet connection. Gemini Nano therefore provides a foundational low-latency model for integrated AI processing, with multimodal capabilities; this effectively allows the AI to understand more about the context of what it is asked to do and what is happening on screen.
An example of this in action was how Gemini can detect a call seeking to scam a person out of their banking details and alert them before fraud occurs. And since this processing is done on the phone, there is no worry about a remote AI listening in on private conversations.
Likewise, the AI can use its contextual understanding to help provide accurate descriptions of what a visually impaired person is looking at, whether in real life or online.
In short, Google intends to make an AI-focused Android more useful and more powerful when it comes to finding things and getting them done. And with Gemini Nano's multimodal capabilities coming to Pixel devices later this year, we can surely expect the Google Pixel 9 series to be the first phones with Android reimagined.