So that’s it from Google today – with a somewhat abrupt pull of the livestream plug at the end of the event.
Overall it was pretty underwhelming and felt like a defensive ploy to pop Microsoft’s AI hype balloon. While there were a few small announcements — an expansion of Indoor Live View for Google Maps, a “search your screen” feature for Google Lens, and a demo of Google Bard in Search — there was certainly nothing new on the scale of Microsoft’s new chat feature for Bing and Edge.
Google slightly undermined its own AI event by prematurely releasing a Google Bard preview a few days ago – and we haven’t really gotten any new information on exactly how it’ll work when it’s made available to the public in the “coming weeks”.
And with that, we’re off to see if the Google Maps Immersive View is working in London yet – thanks for tuning in, and keep an eye out for new updates here as we get more official post-event information from Google.
There were a few smaller announcements to wrap up Google’s presentation, along with an appearance from Blob Opera.
Google has also announced a number of new AI-powered arts and culture features. New tools include the ability to search for famous paintings by thousands of artists and study them in minute detail.
The company says it’s making a conscious effort to preserve endangered languages as well, using AI and Google Lens to intuitively translate words for common household items.
Hmm, the live stream seems to be down now – maybe Microsoft’s Clippy pulled some nefarious tricks. Hopefully we’ll be back soon folks.
Good news for electric car owners: Google is also rolling out new Maps features for EV drivers to make sure you have enough charge.
“To eliminate range anxiety, we use AI to suggest the best charging stop, whether you’re taking a car trip or just running errands nearby.”
It takes into account traffic, your charge level, and the energy consumption of your journey. It also shows stops with “very fast charging” for a quick boost.
Google is now talking about the lesser-known “Indoor Live View” for airports, train stations, and shopping malls. In a handful of cities and locations, AR arrows show you where things like elevators and baggage claim are – pretty damn useful, if a little limited at the moment.
Thankfully, Google is expanding it further with its “biggest expansion of Indoor Live View yet,” bringing it to 1,000 new venues at airports, train stations, and malls in London, Tokyo, and Paris “in the coming months.”
Google Maps Live View Search combines AI with AR to help you find nearby things like restaurants, ATMs, and transportation hubs visually by looking through your phone’s camera.
It’s already live in five cities on Android and iOS, and will be expanded to Barcelona, Dublin and Madrid “in the coming months”. Now we get an outdoor demo – you tap the camera icon in Google Maps and it overlays nearby places on top of the camera view, including ones that aren’t in your direct line of sight.
You can see how busy they are right now and how well they’re rated. Not a brand-new feature, but definitely a useful one to see rolled out on a larger scale.
Google’s VP & GM at Chris Philips is now on stage talking about Google Maps. “We’re transforming Google Maps again,” he says.
The very impressive “Immersive View” is demonstrated, which we have seen before. It uses AI to fuse billions of Street View images to give you Superman flyovers of major landmarks and restaurant interiors.
The good news is that Immersive View is finally rolling out in multiple cities, including London, Los Angeles, New York, and San Francisco today, and more cities in the coming months. Now let’s see “Search with Live View”…
Generative AI is coming to Google Search and Google is giving more examples of how it will work.
For example, you can ask “what are the best constellations to look for when stargazing”, and then delve deeper into what time of year they’re best seen. All very similar to Microsoft’s new Bing chat.
Google also talks about generative imagery, which can create 3D images from still photos. It says this could be used to design products or find the perfect pocket square for your new blazer. No details yet on how the rollout will happen.
Developers also get a large suite of tools and APIs to build AI-powered apps.
Okay, we’re on to “large language models” like LaMDA. It’s the force behind Google’s new AI chat service “Bard,” which it calls “experimental.”
As Google previously announced, a lightweight model version of LaMDA is being released to “trusted testers” this week. There’s no news yet on when it will launch publicly, beyond the “coming weeks” that Google mentioned earlier this week.
Google Lens is getting a big boost. In the coming months, you’ll be able to use Lens to search what’s on your phone screen.
For example, long-press the power button on an Android phone to search a photo. As Google says, “if you can see it, you can search it”.
Multisearch also lets you search for a real-world object – such as a shirt or a chair – in a different color, and it’s being rolled out globally for all image search results.
The message right now is that Google has been using AI technologies for a while.
A billion people use Google Translate. Google says many Ukrainians seeking refuge have used it to help them navigate new environments.
A new “zero-shot machine translation” technique learns to translate into another language without ever seeing paired translation examples. Google has used this method to add 24 new languages to Translate.
Google Lens has also hit a major milestone – people now use Lens more than 10 billion times a month. Visual search is no longer a novelty.
Senior Vice President Prabhakar Raghavan speaks onstage about the “next frontier of our information products and how AI is driving that future”.
He points out that Google Lens “goes beyond the traditional notion of search” to help you shop and place a virtual armchair you want to buy in your living room. But as he says, “search never gets solved” and it remains Google’s Moonshot product.
Right then, just two minutes until Google’s live stream kicks off. It’s unclear why Paris was chosen as the venue – but maybe it has something to do with those hinted-at Maps features…
“4/ When people turn to Google for deeper insight and understanding, AI can help us get to the heart of what they’re looking for. We’re starting with AI-powered features in Search that distill complex information into easy-to-understand formats so you can see the big picture and then explore more” pic.twitter.com/BxSsoTZsrp (February 6, 2023)
Only 15 minutes until the start of Google’s Live from Paris event. One of the big questions for me is how interactive Google’s conversational AI will be – in the new version of Microsoft Bing, the chat results will gradually become more detailed.
This is a big change from traditional search because it means your first result can be the start of a longer conversation. Will Google cite its sources the same way the new Bing does? The first screenshots were unclear about this, but we will find out more very soon.
So what exactly do we expect from Google today? Search will clearly be the big topic as we begin to learn more details about how Google’s conversational AI will be integrated into search.
Any big change to search would clearly be a big deal, as Google hasn’t changed much about the external UI of the minimalist bar that most of us tap without thinking.
But we’re more likely to see baby steps today — Google has labeled Bard an “experimental” feature, and it’s only based on a “light model version” of LaMDA AI technology (which is short for Language Model for Dialogue Applications, if you were wondering).
Like Microsoft’s new Bing, any integration of Bard with search will likely be presented as an optional extra rather than a replacement for the classic search bar – but even that would be big news for a search engine that has an 84% market share (at least for now).
Good morning and welcome to our live blog of Google’s Live from Paris event.
It’s another exciting day in AI land as Google prepares to counter Microsoft’s big Bing and Edge announcements from yesterday. This week’s tussle reminds me of the big heavyweight tech battles of the early 2010s, when Microsoft and Google traded petty attacks over mobile and desktop software.
But this is a new era, and the squared circle is now AI and machine learning. Microsoft seems to think it can get ahead of Google in search — and against all odds, it actually might. I’ll reserve judgment until we see what Google announces today.