odysseus2000 wrote:It will be hugely indicative of future AI if non Lidar systems can not work better than humans. In the UK humans kill or seriously injure about 10 people per day with cars, massively more than airplane accidents & so the prize for anyone who can make a low cost non Lidar system is colossal.
If non Lidar systems can not be made to work it will be a huge tell for the future of AI & will likely send off the advent of mass robotic driving long into the future as Lidar systems at wavelengths that do not damage human eyes are expensive & if one needs Lidar one presumably needs redundant Lidar. It is not clear to me that Lidar systems as used by some robotic cars can be made low cost in mass production, but happy to be corrected if this is wrong.
I think you're way off the mark in understanding where AI comes into the game.
None of the serious self driving contenders will be using "AI" as a single black box. They won't just feed a load of camera inputs into a deep learning CNN and then drive around expecting it to extract every aspect of driving by itself! e.g. learning that it needs to recognise road signs, learning what speed limits are and how to recognise them, learning the rules of junctions, etc. That would be very naive. (A nice experimental project for a research institute, but definitely not how any serious real world effort would approach developing a self driving car!)
So lidar or no lidar will tell you nothing about the future of AI!
As I mentioned in another post a long while ago on this board (different thread), the modern revolution in AI basically provides engineers with new building blocks for which there was previously no equivalent. It's true that you couldn't make a self driving car without these building blocks - there's no alternative non-AI solution (e.g. to assimilating and interpreting visual information). But how you assemble these building blocks into a working self driving system is standard engineering rather than AI.
The AI in modern self driving cars is (I would hope) modularised into a more traditionally engineered system. Different components responsible for different aspects... one or more components for recognising and reading road signs, like speed limits, etc.. one or more for simply looking where physically it would be possible to drive (independent of human rules)... others looking for the human rules that constrain where and how to drive...
I would expect any real-world self driving car attempt to have inside the software/computer an explicit model of the environment around the car. Something that can be presented to the user, or to a crash investigator, or even just to the original developers as they debug their system.
This may sound "well durr!"... but deep learning CNNs don't of themselves provide that!
One of the criticisms that's always been levelled at artificial neural networks is that you can't 'unpick' the 'weights' - you can't 'see' what it's 'thinking'. And while there have been a few research attempts to get around that (you've probably seen the psychedelic dreamlike pictures that google put out - https://www.ibtimes.co.uk/google-deepdr ... ne-1509518), they are nowhere near useful for engineering purposes.
This all comes back round to - you shouldn't think of self driving cars as an "AI" (singular). They are still a system engineered from components. Some of the components may be revolutionary - like the ability to identify and read speed limit signs in real time, to identify traffic lights, or zebra crossings, etc. But these will still feed into other higher level controllers in a traditionally engineered way.
There is not going to be a single 'AI' in a self driving car; rather a collection of specialised modules - some AI some not, all combining to provide the self driving system.
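To make the "collection of specialised modules" idea concrete, here's a minimal sketch of what such a system could look like in code. All the class and field names here are invented for illustration - this is not any vendor's actual architecture, just the shape of the idea: perception modules write into an explicit, inspectable world model, and a traditionally engineered planner reads from it.

```python
# Illustrative sketch only -- names are hypothetical, not a real vehicle API.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class WorldModel:
    """Explicit model of the environment around the car -- something you can
    show to a developer or crash investigator, unlike raw CNN weights."""
    speed_limit_mph: Optional[int] = None
    drivable: bool = True
    obstacles: list = field(default_factory=list)

class SpeedLimitReader:
    """Stands in for a trained sign-recognition module (a CNN in practice).
    Its only job is to update one part of the world model."""
    def update(self, model: WorldModel, detected_sign: Optional[int]) -> None:
        if detected_sign is not None:
            model.speed_limit_mph = detected_sign

class Planner:
    """Traditionally engineered controller consuming the world model --
    no deep learning here, just explicit rules."""
    def target_speed(self, model: WorldModel) -> int:
        if not model.drivable or model.obstacles:
            return 0
        return model.speed_limit_mph or 20  # cautious default if no sign seen

model = WorldModel()
SpeedLimitReader().update(model, detected_sign=30)
planner = Planner()
print(planner.target_speed(model))  # 30
model.obstacles.append("pedestrian")
print(planner.target_speed(model))  # 0
```

The point of the sketch is that the 'AI' lives inside individual modules like SpeedLimitReader, while the glue between them - the world model and the planner - is ordinary, inspectable engineering.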
Some AI modules will perform their AI task better than others. I can fully imagine that in a few years people will talk about adding particular AI modules, or about what AI modules their car has, much like people talk today about what type of tyres they have.
The non-lidar aspects clearly can be made to 'work'... as in, perform the driving task to the standard of a human being - but we know humans make mistakes.
Lidar on self driving cars is really akin to automatic emergency braking on non-self driving cars. It's just there to cover anticipated exceptional shortfalls.
We accept humans driving without automatic emergency braking, but now that the technology exists, I can see that, like ABS etc, it will in the near future become compulsory for non-self driving cars.
And as such there's no reason why such systems shouldn't also be a compulsory complement on self driving cars as well.
It's not a 'redundant' system in that it can't actually take over from the AI self driving aspect. It's just a safety fall back for exceptional circumstances.
As long as the lidar has self diagnostics that can establish whether it's working correctly or not, there isn't then a need for dual redundancy or anything like that.
If the lidar indicates that it isn't working, then the non-lidar controller is robust enough to either just pull over to the side of the road, or even potentially make it to a nearby garage a few miles away.
The car isn't going to fall out of the sky if the lidar stops working!
As for the issues you have with lidar, only you seem to be particularly worried by them. I haven't seen any serious sources raise concerns over damaging human eyes - and there are plenty of lidar systems out there on the roads on prototypes which would not be permitted if there was any risk of damage.
The race is currently on to get lidar cheap - and progress is rapid. I watched an (industry) video a few months ago which seemed to imply that it's now a sprint to the finish with lidar. The cost is already getting reasonably low - it would already be viable on top end cars. Now it's just the standard engineering process of getting it standardised, volumes up and therefore cheaper.
In response to other comments on the thread...
As for the general question of redundancy (where did the comparison with airliners come from?)... I mean, cars are on the ground, and typically carry only 4 to 6 people. The risk assessment is entirely different.
The risk with a self driving car and failures is going to be more towards identifying the failure, rather than having redundancy to take over. A car can simply put on a self-driving equivalent of hazard lights (which I'm sure will be developed) that basically says "Hey, I've detected a fault and will be stopping immediately", warning everyone around. And then it just applies the brakes and stops.
There doesn't need to be another replacement system there to take over. All that self driving cars need is the appropriate self diagnostics to check that they are functioning as per spec.
Things like checking cameras are functional, and providing the appropriate coverage (pointing the right way), not obstructed, etc. And that integrated lidar is appropriately aligned, etc. And that all electrical circuits are functioning correctly.
There doesn't need to be 'redundancy' of the kind used in aircraft ... in self driving cars, it just needs to be able to detect a fault then stop.
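The "detect a fault, then stop" logic above can be sketched in a few lines. The diagnostic check names here are invented for illustration - a real vehicle would have far more checks and a real actuation layer - but the control flow is the point: no backup system takes over, the car just signals and brakes.

```python
# Illustrative sketch only -- check names are hypothetical, not a real API.
def run_diagnostics(status: dict) -> list:
    """Return the list of failed checks from a sensor status report."""
    checks = ["camera_functional", "camera_unobstructed",
              "lidar_aligned", "circuits_ok"]
    return [c for c in checks if not status.get(c, False)]

def control_step(status: dict) -> str:
    """One decision step: continue driving, or signal a fault and stop."""
    faults = run_diagnostics(status)
    if faults:
        # No redundant replacement system takes over -- the car just
        # warns everyone around and applies the brakes.
        return "HAZARD_SIGNAL_AND_BRAKE: " + ", ".join(faults)
    return "CONTINUE_DRIVING"

healthy = {c: True for c in ["camera_functional", "camera_unobstructed",
                             "lidar_aligned", "circuits_ok"]}
print(control_step(healthy))                              # CONTINUE_DRIVING
print(control_step({**healthy, "lidar_aligned": False}))  # ...AND_BRAKE: lidar_aligned
```

Contrast this with an airliner, where a failed system must be replaced in flight by a redundant one - here the safe state ("stopped at the side of the road") is always reachable, so detection is enough.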