odysseus2000 wrote:Yes, I understand what you are saying, but I think the composite of systems will feed into a "decision engine" that will decide what to do. I imagine that this will be called an AI system and if useful will be able to drive without any human in the vehicle and do so more safely than if it was human driven
I'd agree to an extent.
But between the sensors (including AI for identifying road signs, cars, etc) and the 'decision engine' I would expect a central model where all the sensors feed into a combined world view.
Like this kind of thing, with overlays indicating what the system has sensed, etc...
http://lidtime.com/wp-content/uploads/2 ... amnewm.jpg
The decision engine would then make decisions based on this model, but I would fully expect that it will include non-AI aspects as well as AI aspects.
For example, simple physics, just making simple predictions ... i.e. if we keep going at this speed, will that pedestrian be out of the way, etc.
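A check like that needs no AI at all, just kinematics. A minimal sketch (the function name and numbers are my own invention, purely illustrative):

```python
# Purely kinematic check, no AI involved; all names/values illustrative.

def pedestrian_clear_in_time(car_speed_mps: float,
                             distance_to_crossing_m: float,
                             ped_remaining_path_m: float,
                             ped_speed_mps: float) -> bool:
    """True if the pedestrian will have cleared the crossing point
    before the car arrives at its current speed."""
    if car_speed_mps <= 0:
        return True  # car is stationary, so no conflict
    time_car_arrives = distance_to_crossing_m / car_speed_mps
    time_ped_clears = ped_remaining_path_m / max(ped_speed_mps, 1e-6)
    return time_ped_clears < time_car_arrives

# Car at 10 m/s, 50 m out; pedestrian has 2 m left at 1.4 m/s.
print(pedestrian_clear_in_time(10.0, 50.0, 2.0, 1.4))  # True
```

(In reality you'd want safety margins, uncertainty, braking distance, etc, but the point is that it's plain engineering, not a neural network.)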
There may be an element of AI in there. For example, the AI is likely to model behaviours (dogs, cats, children, cars, etc) and be able to provide a 'probability' prediction of where they might go. But I'd expect the results of that to feed into more traditional statistical models.
They may then feed into more AI.
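So the split might look like this: a learned behaviour model emits a probability distribution, and a perfectly ordinary statistical step consumes it. A toy sketch (grid cells and probabilities invented for illustration):

```python
# Illustrative sketch, all names and values assumed: an upstream
# behaviour model emits a probability distribution over where a child
# might move next; a plain statistical step then turns that into a
# conflict probability for the car's planned path.

def conflict_probability(predicted_cells: dict, planned_path: set) -> float:
    """Sum the probability mass of predicted positions that lie
    on the car's planned path (cells are simple grid coordinates)."""
    return sum(p for cell, p in predicted_cells.items() if cell in planned_path)

# e.g. the behaviour model says the child most likely stays on the kerb
prediction = {(0, 0): 0.6, (0, 1): 0.3, (1, 1): 0.1}  # cell -> probability
path = {(0, 1), (0, 2)}                               # cells the car will cross
print(conflict_probability(prediction, path))         # 0.3
```

The decision engine can then apply an explicit, auditable threshold to that number rather than letting a network decide opaquely.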
But you'd also need to be able to incorporate the rules of the road. And these might change.
Or even just change moving from one country to another!
So again, for those reasons alone, I would expect a degree of more traditional engineering at the top level.
I wouldn't imagine that self driving car providers would want to have to retrain the CNNs from scratch if, for example, left turn on red became legal in Britain (as per right turn on red in the US).
That kind of action I would expect to be integrated with a rules engine or explicit 'action model' of some sort that is applied in recognised situations.
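i.e. something like a data-driven rules table, so a change in the law becomes a configuration edit rather than a retrain. A hypothetical fragment (rule names and values are invented):

```python
# Hypothetical rules-engine fragment: jurisdiction-specific road rules
# kept as plain data, so a legal change means editing a table, not
# retraining any neural network. All entries invented for illustration.

RULES = {
    "UK": {"turn_on_red": None,    "drive_on": "left"},
    "US": {"turn_on_red": "right", "drive_on": "right"},
}

def turn_on_red_allowed(country: str, direction: str) -> bool:
    """Look up whether turning on red in the given direction is legal."""
    return RULES[country]["turn_on_red"] == direction

print(turn_on_red_allowed("US", "right"))  # True
print(turn_on_red_allowed("UK", "left"))   # False
```

If left turn on red ever did become legal in Britain, you'd change one entry in the UK row and every deployed car would follow suit.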
Or even behaviour around smart motorways and their hard shoulders. (Although this I would expect to be handled mostly by reading and interpreting signs; a self driving car will need to be able to recognise diversions, checkpoints, lane times of operation, etc, all dynamically.)
Anyway, in general, for practical engineering reasons around adapting to and implementing rules of the road, flexibility, etc, I wouldn't necessarily rule out some AI in there at the decision level, but I probably wouldn't expect it to be a single 'AI'.
And for accountability reasons, I'd also consider it less likely that AI (as in CNNs, etc) would be used as the final decision maker.
If the car makes an 'odd' decision and crashes, it wouldn't really be acceptable for the engineers to shrug their shoulders and say "Dunno, that's what the CNN decided to do".
You could get away with it at the lower level, e.g. if a CNN misread a sign, or misidentified something around the vehicle - particularly if anyone reviewing the footage agreed that it would have been difficult for a human to see.
But at the top decision level, it really is more important that engineers can show the final decision is robust: going in a direction that the lower levels believe is drivable, that the rules engine says is legal, that the capability engine says the car can manage (e.g. acceleration, or power up a hill, etc), that the cameras say is clear of pedestrians, etc.
Maybe it could be done with a dual system... an AI making its decision in a fuzzy, opaque manner, but having to pass a final validation check that ensures all the above constraints are met, etc.
So you might not be able to tell why it chose a particular legal action (out of many potential ones), but you can at least be sure it is safe and legal.
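The shape of that dual system might be: the opaque planner proposes, a transparent validator disposes. A rough sketch (all the interfaces here are assumptions of mine, not anyone's actual design):

```python
# Sketch of the dual-system idea: an opaque planner proposes candidate
# actions; a transparent validator rejects any that fail the explicit
# safety/legality constraints. All names and interfaces are invented.

def validate(action, drivable, legal, within_capability, path_clear) -> bool:
    """Every check here is auditable: engineers can show exactly
    which explicit constraints an accepted action satisfied."""
    return (drivable(action) and legal(action)
            and within_capability(action) and path_clear(action))

def choose_action(candidates, *checks):
    """Return the first AI-proposed action that passes validation,
    falling back to a safe default if none do."""
    for action in candidates:
        if validate(action, *checks):
            return action
    return "controlled_stop"

always_ok = lambda a: True
no_overtake = lambda a: a != "overtake"  # e.g. the rules engine says no
print(choose_action(["overtake", "follow"],
                    always_ok, no_overtake, always_ok, always_ok))  # follow
```

The validator is the bit engineers can point to after an incident: every accepted action demonstrably passed each explicit check, however opaquely it was proposed.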
I have to admit, though, that details of the 'decision engines' being developed by the various groups are much harder to come by on the internet, and the design options more open.
For example, at this level you'd also want to consider direction finding. I would expect some element of feeding external detection information back to update the known location on a map, for areas without GPS or with poor GPS.
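That feedback loop is classic estimation rather than deep learning - e.g. blending a dead-reckoned position with a fix derived from a recognised landmark. A one-dimensional toy version (the weighting and numbers are assumed):

```python
# Rough sketch (assumed numbers/names) of fusing a dead-reckoned
# position with an observation-based fix, e.g. from recognising a
# known road junction. A weighted blend like this is classic
# estimation theory, not deep learning.

def fuse_position(dead_reckoned: float, landmark_fix: float,
                  landmark_weight: float = 0.75) -> float:
    """Blend odometry with a landmark-derived position estimate;
    trust the landmark more when GPS is absent or poor."""
    return (1 - landmark_weight) * dead_reckoned + landmark_weight * landmark_fix

# Odometry has drifted to 102.0 m along the road, but a recognised
# junction implies we are actually at about 100.5 m.
print(fuse_position(102.0, 100.5))  # 100.875
```

A real system would use a Kalman-style filter in two or three dimensions, but the principle is the same: explicit, inspectable weighting of evidence.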
The navigation system would also need to be able to cope with incorrect or out of date mapping information, or road changes that are under construction.
You might also want to have it factor energy efficiency into its decisions.
You might also want it to consider passenger comfort. Perhaps a mellow setting (slow into the corners, braking early, etc) for those nervous of the technology, and a more aggressive setting for those who trust it more and need to get to work quicker, etc.
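Again, that looks like ordinary configuration layered over the same decision engine, not a separate AI per driving style. A hypothetical profile table (all names and numbers invented):

```python
# Hypothetical ride-profile table: one decision engine, tuned by plain
# configuration rather than separate AI models. Values are invented.

PROFILES = {
    "mellow":     {"max_brake_ms2": 2.0, "corner_speed_factor": 0.75},
    "aggressive": {"max_brake_ms2": 4.5, "corner_speed_factor": 1.0},
}

def corner_entry_speed(advised_mps: float, profile: str) -> float:
    """Scale the advised cornering speed by the selected profile."""
    return advised_mps * PROFILES[profile]["corner_speed_factor"]

print(corner_entry_speed(15.0, "mellow"))      # 11.25
print(corner_entry_speed(15.0, "aggressive"))  # 15.0
```

Adding a new style, or letting the passenger adjust a slider, is then a data change rather than a retrain.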
These are all things that would (likely) be better incorporated/merged by more traditional means rather than through AI.
In fact, the more I think about all the various aspects you'd expect or could develop at the 'decision' level of a self driving car, the less I'd expect it to be primarily AI based, and the more I'd expect it to be predominantly traditional (software) engineering based.
The AI game changer that makes self driving possible (where it wasn't before) is really in the low-level detection and recognition of real-world objects, in real time, from camera information: being able to identify them in real time and position them in an internal model of the real world.
Getting that internal model (from the low level) is where AI is the game changer.
The decision engine is probably what is going to make or break the various players in the game. Get the decision engine right, and it will boost your chances, get it wrong and you'll struggle.
And I don't just mean safety.
I mean, if company A's system can work without GPS and reliably update its position using feedback from its sensors (e.g. watching for road junctions, etc), whereas company B's really struggles, or doesn't even attempt to update position other than by GPS, then company A is going to have a clear advantage.
If one company provides the ability to adjust the ride for speed or comfort, etc, whereas a competitor just offers take it or leave it, the first company is going to have a competitive advantage from that.
Then there's also 'soft' feedback. For self driving cars, trust is going to be key. If a car provides clear, reassuring feedback, for example an on-screen display that reliably reflects what the passengers can see around them, they are going to feel far more at ease than in a competitor's car that doesn't present such a clear model to the passengers.
Similarly, a car that explains to passengers why it isn't going in a particular direction will also help build trust and give an advantage.
But these latter two in particular would be less feasible if you're letting a single (deep learning style) AI do the final decision making.
I think once you've got the internal model from the low level sensors and AI, then apart from perhaps adding in behavioural expectations from AI, most of the decision making is more likely to be regular engineered software.