Gemini, Google
Google on Tuesday revealed new Android development tools, a new mobile AI architecture, and an expanded developer community. The announcements accompanied the unveiling of an AI Mode for Google Search at the Google I/O keynote in Mountain View, California.
Google’s Gemini Diffusion demo didn’t get much airtime at I/O, but its blazing speed—and potential for coding—has AI insiders speculating about a shift in the model wars.
Google says the release version of 2.5 Flash is better at reasoning, coding, and multimodality than the preview, while using 20–30 percent fewer tokens. The release is now live in Vertex AI, AI Studio, and the Gemini app, and will become the default model in early June.
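For context, a minimal sketch of calling the new model from the google-genai Python SDK; the model ID "gemini-2.5-flash" and the environment-variable API key are assumptions for illustration, not details confirmed by the piece:

```python
# Minimal sketch, assuming the google-genai SDK (pip install google-genai)
# and a GEMINI_API_KEY set in the environment. The model ID below is an
# assumed identifier for the release version of 2.5 Flash.
from google import genai

client = genai.Client()  # reads the API key from the environment

response = client.models.generate_content(
    model="gemini-2.5-flash",  # assumed release-version model ID
    contents="Summarize the Google I/O Gemini announcements in one sentence.",
)
print(response.text)  # text portion of the model's reply
```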
At its I/O presentation, the company revealed AI assistants of all kinds, smart glasses and headsets, and state-of-the-art AI filmmaking tools.
Google’s AI models are learning to reason, wield agency, and build virtual models of the real world. The company’s AI lead, Demis Hassabis, says all this—and more—will be needed for true AGI.
Google’s AI models have a secret ingredient that’s giving the company a leg up on competitors like OpenAI and Anthropic. That ingredient is your data, and the company has only scratched the surface of how it can use your information to “personalize” Gemini’s responses.
I pitched the new Google Gemini against ChatGPT for AI image generation – and the results shocked me
That's because the AI image generation trend took off after ChatGPT got a serious image upgrade in March, while Gemini was still relying on Imagen 3, which had some limitations.
Ads are coming to Google AI Mode and to AI Overviews on desktop.