BD,
Re starting with the right number of sensors, I humbly disagree. Starting with far too many will simply cause dev-team distraction and rework, and increase cost and time. To use an extreme example, let's say a warship has a radar, a sonar, and an opto-tracker, and the job is to create an AI-based short-range missile defense system. You'd start by ignoring the sonar data, as it will simply be irrelevant in 99% of short-range situations, and integrating it would be a very challenging job. Instead you observe that you can get a viable solution with just the other two, and crack on.
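To make the point concrete, here is a toy sketch of that "two sensors are enough" idea: confirm a threat only when radar and optical bearings agree, with sonar excluded by design. All the names, thresholds, and data here are invented for illustration; this is not any real defence system.

```python
# Toy illustration: fuse only the two relevant sensors (radar + opto),
# deliberately leaving sonar out of the architecture altogether.
# Names and thresholds are made up for illustration.

RELEVANT_SENSORS = {"radar", "opto"}  # sonar excluded by design

def fuse_detections(detections, max_bearing_gap_deg=2.0):
    """Confirm a track only when radar and optical bearings roughly agree."""
    usable = [d for d in detections if d["sensor"] in RELEVANT_SENSORS]
    radar = [d for d in usable if d["sensor"] == "radar"]
    opto = [d for d in usable if d["sensor"] == "opto"]
    confirmed = []
    for r in radar:
        for o in opto:
            if abs(r["bearing_deg"] - o["bearing_deg"]) <= max_bearing_gap_deg:
                confirmed.append(
                    {"bearing_deg": (r["bearing_deg"] + o["bearing_deg"]) / 2}
                )
    return confirmed

detections = [
    {"sensor": "radar", "bearing_deg": 41.2},
    {"sensor": "opto",  "bearing_deg": 40.1},
    {"sensor": "sonar", "bearing_deg": 180.0},  # irrelevant, never fused
]
print(fuse_detections(detections))  # one confirmed track near bearing 40.65
```

The point is architectural: because sonar never enters the fusion path, there is no integration work, no calibration, and no test burden for it, which is exactly the saving you get by not starting with every sensor available.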
That tome, as you call it, is a real goldmine, and I spent a few hours wading through it and the various links yesterday. As best I can tell, the market leaders in autonomous vehicle systems are Waymo (Google/Alphabet), MobilEye (Intel), and Tesla, plus the wildcard of Apple and various Chinese players. There are many interesting things about the scene, including Intel's assessment that they can sell each kit for a few thousand dollars (sensors + compute + software + data services), in return for which the OEMs add $5k to the price tag; and the issues behind the MobilEye / Tesla split (which is far more complex than you suggest, imho). Oh, and the tech is interesting to me given my background.
Also interesting is the development pathway, where Tesla pushed the early kit to (or beyond) its intended limits and by doing so released L2 earlier than everybody else, and in volume (450k vehicles by end Q3 2018), with the consequences we see. This in turn has caused the industry view to evolve. Previously the industry saw the L1 / L2 technologies as merely driver assist, so whilst they developed them, they basically hobbled them and set them aside whilst pushing on to L4/L5. But then Tesla pushed the L2 technologies to their logical limit in HW1, using a mix of MobilEye and Tesla stuff which in practice was used both on segregated roads (aka controlled-access highways with no crossing traffic) and on non-segregated roads. The spat between MobilEye and Tesla caused Tesla to switch to HW2 and rebuild their code base, and now (with software v9) they are delivering what is generally called L2+, i.e. good enough autonomy for a lot of drivers doing a lot of driving. Interestingly, MobilEye/Intel have now realised that they cannot ignore L2+ as a pathway, and so they have 15-20 or so projects beginning to come to market to deliver comparable functionality (e.g. GM's Cadillac Super Cruise is a MobilEye system). One can niggle about which of these individual systems is best or better than the others, but they are all intended as L2 and/or L2+.
What will be interesting is how easy it is to get from L2+ to L4/L5. MobilEye and Waymo think they have the true Kool-Aid for this, and think that Tesla is in a developmental cul-de-sac. Reading around, I think Tesla have a good enough handle on all three basic elements of the tool kit: the semantic roadscape, hi-res map creation, and rules-based driving (witness high-speed auto lane change). And they are getting plenty of practice in real life.
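Of the three elements, the rules-based-driving layer is the easiest to sketch. Below is an entirely made-up gate for a high-speed auto lane change: allow the manoeuvre only if the gaps in the target lane give a safe time margin. This is not Tesla's algorithm (or anyone's), just the flavour of what a rules-based layer does; every threshold is an assumption.

```python
# Illustrative rules-based gate for an automatic lane change.
# Thresholds and structure are invented for illustration only.

def lane_change_allowed(ego_speed_mps, gap_ahead_m, gap_behind_m,
                        closing_speed_mps, min_time_gap_s=2.0):
    """Allow the change only if both target-lane gaps give min_time_gap_s."""
    if gap_ahead_m / max(ego_speed_mps, 0.1) < min_time_gap_s:
        return False  # would sit too close behind the car ahead
    if closing_speed_mps > 0 and gap_behind_m / closing_speed_mps < min_time_gap_s:
        return False  # car behind in the target lane is closing too fast
    return True

# 30 m/s (~108 km/h), 80 m clear ahead, 50 m behind, closing at 5 m/s
print(lane_change_allowed(30.0, 80.0, 50.0, 5.0))  # True
print(lane_change_allowed(30.0, 40.0, 50.0, 5.0))  # False: only ~1.3 s ahead
```

The hard parts, of course, are the other two elements: producing the semantic roadscape (perception) and the hi-res maps that feed numbers like `gap_ahead_m` into rules like this in the first place.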
Remember, of the three Tesla fatalities everyone keeps going on about, at least two were running MobilEye + Tesla on HW1. I'm not sure about the third. In the first case it is unclear whether the system was even switched on.
https://en.wikipedia.org/wiki/List_of_s ... fatalities

What is interesting is that everything coming to market in the 2019-2020 period running L2 / L2+ will, as far as I can see, be doing so without an integrated LIDAR. There is no industry consensus on whether LIDAR is required for L4/L5.
It is interesting to have delved into this as it is one of the three more immediate pieces of the Tesla value proposition.
regards, dspp