How AI will allow us to talk to our cars

Google and Continental’s system will let us ask our cars questions without taking our eyes off the road.

A phrase that pops up a lot these days is ‘the software-defined car’, or ‘software-defined vehicle’.

On the face of it, that sounds like manufacturers embracing the idea of unlocking functions such as heated seats only in return for extra cash from the buyer, something that has been the subject of much debate recently.

But it’s not all about that. The idea is that by basing what a car does, and the way it works, more on software and less on hardware, functions such as advanced driver assistance systems (ADAS), infotainment, anti-collision systems and user interfaces can be changed or improved without taking the car apart or making new parts.

In theory, that should be better for us, the car buyers. It’s already happening: the design of a touchscreen and the functions it enables, for example, can be upgraded by flashing new software into the car, much like loading a different app onto a phone or computer.

It also allows AI to be introduced into a car’s human-machine interface and updated over time, making it easier for a driver to control aspects of the car with voice commands that actually work.

Google and automotive supply giant Continental announced at the IAA Mobility 2023 show in Munich that they are to collaborate on bringing Google’s data and AI technology directly into vehicle computers. The system will be based on Continental’s new Smart Cockpit High-Performance Computer (HPC).

The idea is that generative AI will allow occupants to talk to the car in natural speech, without having to formulate clunky phrases or stick to a fixed vocabulary. The recognition and processing are done in the cloud, drawing on the immense power of Google’s computers rather than relying on the comparatively puny processor in the car.

One of the key differences between this and what’s gone before is the way the driver can communicate with the car. The driver can ask follow-up questions without repeating the whole context, because Google Cloud’s generative AI keeps track of that. The system won’t be entirely vehicle-agnostic, though: each installation has access to information specific to that car, such as the model’s instruction manual.
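Neither company has published details of how that context handling works, but the general idea can be sketched in a few lines of code. Everything in the sketch below is illustrative: cloud_generate is a made-up stand-in for whatever cloud-hosted model the production system actually calls.

```python
# Illustrative sketch only: how a voice assistant can carry conversation
# context so follow-up questions don't need the full request repeated.
# `cloud_generate` is a hypothetical placeholder, not a real Google API.

from dataclasses import dataclass, field


def cloud_generate(history: list[dict]) -> str:
    """Placeholder for a call to a cloud-hosted generative model."""
    return f"(model reply based on {len(history)} turns of context)"


@dataclass
class InCarAssistant:
    history: list[dict] = field(default_factory=list)

    def ask(self, utterance: str) -> str:
        # Every turn, driver's and assistant's, is appended to the history,
        # so a follow-up like "and the rears?" can be resolved against an
        # earlier question about front tyre pressures.
        self.history.append({"role": "driver", "text": utterance})
        reply = cloud_generate(self.history)
        self.history.append({"role": "assistant", "text": reply})
        return reply


assistant = InCarAssistant()
assistant.ask("What should the front tyre pressures be?")
assistant.ask("And the rears?")  # answered using the stored context
```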

So if the driver asks where a USB port is or what the tyre pressures should be, there should be a quick and accurate response. Beyond that, the use of conversational speech should make it easier to find out about local places of interest or anything else you want to know.
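How those model-specific answers are produced hasn’t been described in detail either. One common pattern is to look the question up against the car’s own handbook before falling back to the general model, and the sketch below illustrates that idea with an invented two-entry “manual” and a deliberately naive keyword match, neither of which is taken from the real system.

```python
# Illustrative sketch: answer from the model-specific handbook first,
# fall back to the general cloud model otherwise. The section topics,
# pressures and port locations below are invented example text.

MANUAL_SECTIONS = {
    "tyre pressures": "Front: 2.4 bar, rear: 2.2 bar (invented figures).",
    "usb ports": "Two USB-C ports under the centre armrest (example text).",
}


def lookup_manual(question: str) -> str | None:
    """Return the manual passage whose topic words appear in the question."""
    q = question.lower()
    for topic, passage in MANUAL_SECTIONS.items():
        if any(word in q for word in topic.split()):
            return passage
    return None


def answer(question: str) -> str:
    passage = lookup_manual(question)
    if passage is not None:
        # Ground the reply in the car's own documentation.
        return passage
    # Otherwise hand the question to the general cloud model (not shown).
    return "I'd need to look that up more widely."


print(answer("Where are the USB ports?"))
print(answer("What should the tyre pressures be?"))
```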

Although the combined system isn’t due for around 18 months, the HPC architecture is already out there; the Volkswagen ID family is one example.

What makes the HPC flexible, and presumably more supportive of the software-defined car idea, is that it can run multiple types of software from different suppliers at the same time. And if it works well in the real world, Continental and Google’s generative AI will hopefully make interacting with a car easier and less dependent on taking your eyes off the road.
