
AI endeavours

The Big Picture Place
odysseus2000
Lemon Slice
Posts: 741
Joined: November 8th, 2016, 11:33 pm
Has thanked: 130 times
Been thanked: 93 times

Re: AI endeavours

#102820

Postby odysseus2000 » December 8th, 2017, 6:25 pm

johnhemming

An interesting question is, of course, what self-awareness means. I am sure that almost all animals have some awareness of self (or the hive).


Yes, but they can't do much about their situation in life.

An AI that becomes self aware might be able to do something about anything it considers a threat.

Everything that has threatened humans has been either removed or is monitored to allow action if it becomes a threat.

Would AI consider humans like we consider vermin?

Regards,

JMN2
Lemon Quarter
Posts: 2164
Joined: November 4th, 2016, 11:21 am
Has thanked: 333 times
Been thanked: 301 times

Re: AI endeavours

#103829

Postby JMN2 » December 13th, 2017, 8:11 am

Kyle Bass chatting with Mark Cuban, topics include AI and cryptocurrencies.

https://youtu.be/PAcZPUjLdf4

onthemove
Lemon Pip
Posts: 51
Joined: June 24th, 2017, 4:03 pm
Has thanked: 8 times
Been thanked: 34 times

Re: AI endeavours

#105177

Postby onthemove » December 19th, 2017, 6:16 pm

For me, AI is a very interesting topic. I studied it at university, then spent the first decade of my career watching AI be treated as a joke - the old cliches about automated phone menus where you have to speak words, the computer does a terrible job of recognising them, and you end up screaming for a real person.

Yet, here we are, and seemingly out of the blue AI has become a buzzword. Everyone is now scrambling to put the word AI into everything - if you haven't got AI in whatever you do, you're behind the curve.

Reading through this thread highlights the problem. There are a lot of 'unknowns', and we are at serious risk of silly legislation being introduced, and so on. And what does all this mean for investors?

So the first thing is being realistic. Why is AI suddenly such a buzzword, after decades upon decades of hollow promises?

History

Pre-noughties, AI had a problem. There were lots of interesting ideas, and AI could do some tasks very well. Back in the 20th century AI had already beaten chess grandmasters.

The problem with that AI was that it was just number crunching. Follow simple rules, throw enough computing power at them, and in a constrained world like chess - a game composed of very straightforward rules - crunching numbers allowed computers to outperform the best chess players in the world.

The problem was, when the same approach was tried with things like computer vision, it failed miserably. Computer vision seemed just so hard.

Back in the 1990's neural networks ('connectionist AI') were struggling to take off. Perceptrons (a popular type of neural network) were taught on the basis of the theory that anything that could be done in 4 or more layers could theoretically be shown to be do-able in 3 layers of neurons. Furthermore, each neuron in the input layer tended to connect to all neurons in the subsequent layers ('fully connected networks').

When I studied it, it felt like if you submitted work with more than 4 layers, your tutor would be wondering whether you'd understood the course material, with all its meticulous proofs that it could all be done with 3 layers, and so on.

The Quiet Revolution - Convolutional Neural Networks

The current fashion for AI is almost solely based on these. They are also more colloquially termed 'deep learning'.

Strictly speaking, these aren't completely new. In fact, people have toyed around with them since the 1950s!

But a number of things and realisations have recently come together.

(1) The 'convolutional' aspect is important. Instead of a 'fully connected' network, where neurons looking at, say, the top left of a picture are trained independently of neurons looking at the bottom right, a convolutional approach takes the view that we don't know where our 'cat' (or whatever) might be in the image. So, at least in the first few layers of the network, instead of many different sets of neuron weightings, a single set of weights is repeatedly applied - 'convolved' - across the entire image (start at the top left, apply it, shift it one to the right, apply it, shift it one to the right, apply it ... and so on).
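To make that weight-sharing idea concrete, here is a minimal, hypothetical sketch in Python/NumPy (my own illustration, not anyone's production code) of one shared kernel being slid across an image:

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide one shared set of weights (the kernel) across the whole
    image. The same weights are applied at every position, so a
    feature learned in one corner can be detected anywhere."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # the same kernel weights are reused at every (i, j)
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# a tiny vertical-edge kernel applied to an 8x8 test image
image = np.zeros((8, 8))
image[:, 4:] = 1.0                    # left half dark, right half bright
kernel = np.array([[-1.0, 1.0]] * 2)  # responds to dark-to-bright steps
response = convolve2d(image, kernel)
print(response.shape)  # (7, 7)
```

Wherever the dark-to-bright edge sits in the image, the same kernel fires - which is exactly why a convolutional layer needs far fewer weights than a fully connected one.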

(2) Depth Matters. AI programmers have finally broken free of the rigid idea that everything can - or rather 'should' - be reduced to 3 layers. Going against what I was taught in the 90s (no matter how theoretically 'provable' it might be), the realisation today is that in practice, 10 or even 20 (or more) layers of neurons can arrive at a solution far quicker, and better, than a 'theoretical' 3 layer solution. Theorists haven't yet determined the science behind 'why' this is the case, nor how to find the 'best' architecture for a particular problem. But real world experiments have left no doubt! The theory can come later.

(3) ImageNet. This is the game changer. ImageNet is a massive database of images that have been meticulously labelled, and can be used for testing machine vision. There have been yearly challenges for quite a while now. Initially no-one seriously used neural networks; entrants used more 'traditional' techniques, where programmers carefully programmed their software to look for pre-determined patterns, etc.

For a couple of years, the resultant programs performed noticeably worse than a human, and any improvements over the previous years were tiny incremental ones. Reliable computer vision still seemed an age away.

But then,...

https://qz.com/1034972/the-data-that-ch ... the-world/
Two years after the first ImageNet competition, in 2012, something even bigger happened. Indeed, if the artificial intelligence boom we see today could be attributed to a single event, it would be the announcement of the 2012 ImageNet challenge results.
Geoffrey Hinton, Ilya Sutskever, and Alex Krizhevsky from the University of Toronto submitted a deep convolutional neural network architecture called AlexNet—still used in research to this day—which beat the field by a whopping 10.8 percentage point margin, which was 41% better than the next best.


Suddenly everyone took notice - this was a revolutionary change in computer vision performance on general image recognition tasks. Computers were now in the ballpark of matching humans.

Since then, the majority of entries to the ImageNet challenge have switched to convolutional neural nets of some variant or other, and their capability is now actually - just - able to surpass that of human beings.

Where we are now...

All of a sudden, using computer vision to identify pedestrians, cyclists, cars, lorries, buses, street signs (including reading them!), traffic lights, dogs, cats, etc..... all these are now very possible with around the same level of reliability as a human being.

This was the pivotal moment that meant fully autonomous cars went from being something from distant science fiction, to becoming a very, very real probability.

It is now just a race to be the first to - safely - get autonomous cars to the market.

Crucially, from an investor's point of view, without any major catastrophe that completely undermines public perception of the technology. No mean feat, when the end goal isn't perfection: they just need to be better than the average human driver.

But since the average human driver makes occasional mistakes, autonomous cars can still be massively beneficial even if they have the occasional bump. Nobody is claiming, or can claim, that they will be perfect.

The problem for the technology - and investors - is that any occasional bump will be jumped on by the world's media like a pack of hyenas.

Google (Waymo) seem to recognise this, and - in my view - are taking a very prudent, careful approach. They realise one big cock-up could set back their attempts by quite literally years, and encumber the industry with draconian, restrictive legislation that could prevent the technology ever being allowed to reach its potential.

Uber on the other hand - in my view - seem to be taking quite a cavalier approach, and rushing cars onto the streets without first gaining public approval - not just from the authorities, but also the approval of general public opinion. And by doing so they pose not just a real risk to their own attempts, but to the more sensible attempts of others as well. If they have one bad accident, it could seriously turn public opinion against driverless technology and AI in general.

But Get Real...

There hasn't been any massive jump towards sentient AI.
That is still yonks away.

There hasn't even been any massive jump towards any form of general intelligence.

That is still a while away - though there are some small scale, rough attempts. But like the neural nets of the 1990s, today's general AI is still at the noddy stage. It is still waiting for its revolution, and there are no signs that one is at all imminent - contrary to the impression the current AI buzz might give.

Nothing in the current AI revolution is putting us at risk of computers becoming sentient, and 'breaking their programming' and taking over the world.

There's no threat from the current state of the art technology - at least not from the technology itself turning against us.

As always, there may be threats from how humans might apply the technology, but be under no illusion - any bad effects from the current AI revolution will be entirely due to the will and labour of the humans behind it - just like any technology.

Quite Simply..

The current revolution is based on a revolutionary change in one particular aspect - convolutional neural networks. And in particular towards visual processing tasks.

These superbly fill in a number of 'black boxes' in engineering terms. They provide a functionality as one part of a modular system, that was not available before. Suddenly we have a few more boxes that provide limited - but incredibly useful - functionality that we didn't have before.

You can now have your £40 digital camera identify your face, and recognise when you are smiling. Even 15 years ago, that was complete science fiction with the state of the art back then.

It isn't an overstatement to call convolutional neural networks a revolution.

But don't worry - they haven't turned your compact camera into a sentient being working out ways to kill you off.

All it has done is provide the engineers who made your camera with a module (perhaps implemented in software, perhaps in a dedicated chip) that they can incorporate into the camera: it takes the input from the camera pixels, and outputs where it thinks faces are in the picture, and whether each of those faces is smiling.

It is then simply up to the engineers to decide how they incorporate that into a product. They can use the output from that module in their main camera program, written using regular techniques, to decide when to trigger the shutter.
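As a hypothetical sketch of that engineering pattern (the function names here are invented for illustration, not any real camera API), the trained network is just one callable module inside otherwise conventional control code:

```python
def detect_faces(frame):
    """Stand-in for the trained face-detection module (which might be
    software or a dedicated chip): returns (bounding_box, is_smiling)
    pairs for the given frame. Hard-coded here purely for illustration."""
    return [((10, 10, 50, 50), True)]

def shutter_decision(frame):
    """Ordinary, conventionally-programmed camera logic that merely
    consumes the module's output to decide when to fire the shutter."""
    faces = detect_faces(frame)
    if faces and all(smiling for _, smiling in faces):
        return "fire"
    return "wait"

print(shutter_decision(frame=None))  # prints "fire" with the stub above
```

The point is that nothing outside `detect_faces` is 'AI' at all - the rest is the same everyday programming engineers have always done.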

Similarly, that is how self-driving cars will work, albeit there will be much more 'engineering' work to build a reliable system, which can also be programmed to take into account changes in the rules of the road, car handling, etc. But that is more straightforward engineering / programming that simply builds on top of the image recognition AI.

That's not to say that other AI techniques won't be brought into play - for example, for planning paths, etc. But these aren't quite so revolutionary, more evolutionary (and I don't mean evolutionary algorithms - there's been no revolution there).

So where does that leave us as investors

Well, the obvious application of this AI revolution is autonomous cars - these were clear science fiction before. Now all the parts are in place. The race is now on to make it happen.

Waymo would be my obvious candidate, but they aren't making the cars themselves.

And it seems that all the car manufacturers recognise that self driving is now a matter of when rather than if, and they are all now ploughing billions into it.

I can see two probable outcomes...

1. Waymo becomes the standard - they develop the technology and licence it to the rest. They are certainly going about autonomy the right way in my view. They fully recognise the potential risks - both in terms of public opinion, and also the risk to life of users of their technology - and seem to be following a very pragmatic approach.

2. Each car manufacturer manages to develop their own - which they all seem to be trying to do - and actually, autonomous car technology from an investor point of view, would then just be an investment in the car companies themselves .... Ford, Toyota, and so on. It's just another aspect of technology that goes into cars.

I can see investment risks with each....

Waymo seem to be doing brilliantly with object recognition, place finding in the real world (GPS is useless for local lane navigation), and path planning around what other road users are doing. Where I think they may struggle is in car handling - the actual driving. Waymo are looking for a technology they can sell to other manufacturers, but half of driving is about how the car handles: how quickly do you turn the steering wheel, what's its turning circle, at what point will it lose grip on a wet, snowy road?

Other car manufacturers have been developing traction control, assisted steering and ABS braking systems for a while now.

The question now is which is going to be the bigger challenge. Google seems to have got the high level pedestrian recognition and object avoidance reasonably well sorted already.

My gut feeling is that the remaining challenge, particularly for cars in the UK and other colder, wetter regions - away from California - is going to be the automated *driving* aspect - the actual 'hands on' control of the vehicle. And that might actually be to the advantage of the existing main car manufacturers, who already have a lot of experience developing (safety) technology related to those aspects.

And the Hype

Technology is always improving. Other AI techniques are making incremental improvements.

But if it weren't for the convolutional neural nets, I don't believe we would be seeing this current AI euphoria.

In other words, I do believe that current convolutional neural networks will bring transformational change.

But it will be limited, in the sense that there won't be one or two companies holding all the patents.

In fact, most of the actual AI stuff is in the public domain and free. The patents and protections are coming from associated technologies (like the Lidar systems in Waymo cars, etc), not the AI itself.

We currently have Microsoft, IBM, and Google, etc, all trying to sell AI services.

The interesting question is how much of a monopoly they will have - how much will people need to use AI services, compared to how much the AI will instead be incorporated, e.g. into integrated circuit boards that can be embedded into other electronics.

The 'value' from the AI services is not from the "AI" technology itself. It isn't from the convolutional neural network architecture.

The value comes from training it. The value is in having a trained network that recognises objects.

But Google, etc, can only really make that general purpose. Specialists might want a dedicated convolutional neural network trained to recognise, e.g., brain tumours. But then it is going to be the specialist who needs to train that network. A general network brilliant at recognising cats, or models of car, isn't going to be much use identifying a brain tumour in a scan.

And this might be the Achilles heel of the AI technology from an investor's point of view.

Yes, it's likely to be revolutionary. But that revolution is likely to come from broad use across all industries. I'm not sure that there is going to end up being a single commercial entity that owns and controls access to a single massively intelligent AI to which everyone will necessarily need to connect.

I suppose, after all that, as an investor it gives me hope for the general future that there is a lot of scope for companies to massively innovate, provide new features and technology, and do things in a far more efficient and effective way.

So there is potentially a lot of scope for general economic growth.

But I'm not sure that there are any clear individual winners in terms of being the controlling owner acting as gatekeeper to such technology.

From a worker's perspective, an AI future isn't something that should be feared. AI will be just another tool we all use. It will be valuable, but fragmented - its benefits realised in many areas through the work of many people who are crucially aided, not replaced, by it.

The fears of it taking over and wiping out mankind, are massively overblown. That really is simply fear arising out of ignorance.

In terms of society, it is potentially going to be transformative. Easily trainable image recognition has potentially enormous numbers of beneficial applications. Medical image diagnosis, face recognition, automated monitoring - defect monitoring in factories, etc. More complex OCR - recognising not just text, but potentially diagrams, and drawings as well.

The same technology behind it has been adapted to other image processing functions as well.

There are various examples of work involving depth estimation from a single, monocular image. I can't find it now, but a while ago I saw a video of a remote-controlled toy car driving itself around a campus, avoiding obstacles, solely using a single 2D video camera, with a convolutional neural net estimating the distance of obstacles on a frame-by-frame basis.

Similarly there are other examples of 3d scene reconstruction from a single 2d image. And other examples of 3d models of faces being generated from a single 2d image ... you can even try it yourself! http://cvl-demos.cs.nott.ac.uk/vrn/

Adobe is using image recognition technology to isolate entities in videos, removing the need for laboriously having to do this by hand.

This will open up a whole new world in movie visual effects - or at least, make a previously niche, expensive world, available to even the lowest of budget film producers. Easy to add people in, take them out, change their clothing, etc.

Once you can have the computer easily and automatically recognise independent objects, a whole plethora of opportunity opens up in graphics and video packages. And consequently whole avenues open up to advertisers, graphic designers, etc.

There is even work that attempts to use machine learning (convolutional neural networks) to generate predicted motion from a 2d image... https://www.theverge.com/2016/9/12/1288 ... iction-mit

The scope for economic growth out of this is huge.

But is it all what it seems?

So the technology is accessible to all.

However, there may be one saving grace for investors.

Convolutional Neural Networks are relatively fast and cheap computationally when you have them already trained up. That's why your compact camera can find your face without needing a supercomputer.

But training the network in the first place is a whole different ball game. And this may be of use to investors.

Although the algorithms are freely available, if you actually want to create your own networks, you potentially need a lot of computing power to train them up. Programming the underlying network itself is the easy bit - training the network is the hard bit.
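A rough back-of-envelope sketch of that asymmetry, with purely illustrative numbers for a small two-layer network (my own assumed figures, not anyone's benchmark):

```python
import numpy as np

rng = np.random.default_rng(0)

# A trained network is just fixed weight matrices: one forward pass
# (inference) is a handful of matrix multiplies - cheap enough for a
# compact camera.
W1 = rng.standard_normal((784, 128))
W2 = rng.standard_normal((128, 10))

def forward(x):
    hidden = np.maximum(x @ W1, 0)  # ReLU hidden layer
    return hidden @ W2              # output scores

x = rng.standard_normal(784)
scores = forward(x)                 # roughly 100k multiply-adds

# Training repeats forward AND backward passes over a huge dataset for
# many epochs - the same arithmetic scaled up by a factor of millions,
# which is where the cloud compute bill lives. Illustrative assumptions:
# backprop costs ~3x a forward pass, 1 million images, 10 epochs.
ops_per_inference = 784 * 128 + 128 * 10
ops_training = ops_per_inference * 3 * 1_000_000 * 10
print(scores.shape, ops_training // ops_per_inference)
```

On these assumed numbers, training costs tens of millions of times more arithmetic than a single inference - hence a trained network runs on a £40 camera while training it needs a data centre.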

And this is basically where the big IT companies are pitching it. They may superficially sell "AI" services. But the reality is, the AI is actually relatively free and easy to implement, and for the most part isn't protected by patents, etc.

The reality is, the services are just a front end for their cloud computing. What you are actually paying for is the CPU time.

For example, when Google boasted about AlphaZero beating the world's best with only 4 hours' training from nothing (https://www.theguardian.com/technology/ ... four-hours ), it wasn't really the AI algorithm they were showing off.

They were really showing off the immense brute processing capability of their cloud computing platform.

Cloud computing predates the current AI revolution.
Cloud computing almost felt like a solution looking for a problem.

A cynical person might wonder if the current buzz around AI might potentially just be a way of making you think you have a 'problem' for which you might need a processor hungry, cloud computing solution.

...

Sorry, I hope that isn't too long. It just seemed a good point to try and collect my own thoughts, having been keeping a close eye on the recent AI revolution from a technical perspective.

Clitheroekid
Lemon Quarter
Posts: 1325
Joined: November 6th, 2016, 9:58 pm
Has thanked: 465 times
Been thanked: 928 times

Re: AI endeavours

#105190

Postby Clitheroekid » December 19th, 2017, 8:17 pm

Many thanks for an informative and enlightening post.

I've been in and out of a company called Blue Prism (PRSM) over the past few months, whose share price has been volatile to put it mildly. As my knowledge of AI is pretty much limited to what I've just read I'd be very interested to hear your views on them as a company.

Whilst I read a lot about how successful they are, when the usual metrics are applied the market valuation seems completely insane to me. I suspect that, like cryptocurrencies, there are many 'investors' who haven't a clue what it is that they're buying, and are simply buying because AI is fashionable and if everyone else is buying it must be a good company.

It also seems to me that it's a very competitive field and that PRSM are simply one amongst many.

Incidentally, I appreciate that you may know nothing at all about them, in which case feel free to ignore the question! ;)

onthemove
Lemon Pip
Posts: 51
Joined: June 24th, 2017, 4:03 pm
Has thanked: 8 times
Been thanked: 34 times

Re: AI endeavours

#105222

Postby onthemove » December 19th, 2017, 9:53 pm

Clitheroekid wrote:I've been in and out of a company called Blue Prism (PRSM) over the past few months, whose share price has been volatile to put it mildly. As my knowledge of AI is pretty much limited to what I've just read I'd be very interested to hear your views on them as a company.


I knew nothing about them :^)

But, still curious, I've just looked at their website. Robotic process automation, and probably a 30-minute drive from where I work - I was surprised I hadn't heard of them.

I hope I've found the right company...
https://www.blueprism.com/whatwedo

What follows is purely my reaction to what is publicly available on their website without registering. I acknowledge that the marketing departments who write content for such websites are sometimes completely separate from the technical teams, so this might not be a completely fair appraisal - but it's all I've got to go on.

Firstly, their use of the term 'robotics' is stretching it. All their 'robotics' is simply repetitive software tasks. There's no hardware that I can see. And they seem to go straight onto the defensive to explain they're talking about software only 'robotics'. They know they're stretching the term.

From what I can tell it just seems to be a marketing gimmick for automated scripts.

There doesn't actually seem to be all that much AI in there. In fact the impression I am left with is that their AI functionality is merely that they are a front end for Microsoft Azure AI services.

Their product seems to be aimed at automating user interaction. It just seems to be a (moderately) fancy variant of automated testing tools that software engineers usually use to regression test user interface functionality. But in this case they seem to be using it to automate mundane data entry, or other similar tasks.

According to their videos they're only 'looking at' implementing 'non-deterministic' things ... which rather implies they are basically just automatically scripting clear, straightforward, deterministic tasks - yup, just a fancy automated testing tool being applied in a production rather than a test environment.
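To illustrate what I mean (this is my own invented sketch, not Blue Prism's product or API), a deterministic 'software robot' of this kind is essentially a replayable list of fixed interface steps:

```python
def invoice_entry_robot(record):
    """A deterministic 'software robot': the same fixed steps are
    replayed for every record, with no learning or AI involved."""
    return [
        ("open_form", "invoice_entry"),
        ("fill_field", "customer", record["customer"]),
        ("fill_field", "amount", record["amount"]),
        ("click", "submit"),
    ]

# every run produces exactly the same scripted, predictable actions
steps = invoice_entry_robot({"customer": "ACME Ltd", "amount": "99.00"})
print(len(steps))  # 4
```

Useful, certainly - but it's scripting against a human-oriented interface, not machine intelligence.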

Their dismissive reference in one video to "...unproven AI technologies at the other [end of the spectrum]" leads me to feel they don't feel comfortable with state of the art AI.

If "AI" is the reason you are investing, this company seems to be at best a v. cautious consumer of (other companies') AI, not an AI innovator or developer themselves.

All in all, me personally I wouldn't invest.

The whole business - solely in my own personal opinion, from what I can infer from a very brief visit to their website - seems to be about automating the usually-human interaction between different, disparate software applications.

There may (or may not, I've no idea) be a big demand for that.

I suppose their market could go one of two ways.

Possibility 1:
The original application developers start to recognise where companies like this are automating things, and instead provide the automation directly within their applications ... after all, it's expensive and time consuming to go to all the trouble of developing a user interface for humans if, at the end of the day, it's only going to be used by a 'software robot'! Far more cost effective, less development, etc, to bring the disparate functionality into a cohesive single suite that can function as a single entity. Then demand for this company's type of product / service disappears.

Possibility 2:
I guess if there genuinely is a need for huge flexibility, application developers could develop 'modular' functionality, and companies could use this 'process automation' to tie that functionality together into a cohesive business process. Then, as the business changes, the 'process automation' could be used to adapt and change with it. But I'm not sure I see any real evidence of this. On the contrary, software engineering is forever progressing to 'higher' and 'higher' levels. The sort of things these tools are doing with 'process automation' should be relatively cheap, quick, simple tasks for software engineers these days. And such modularity / flexibility, if it becomes the norm, shouldn't require scripting techniques that seem designed to cope with human interfaces.

In summary ...

They may have found a current gap in the market. They may be making good money out of it. But, in my view, it seems to be providing 3rd party glue for gaps - or 'process paths' - that first party developers may subsequently recognise and ultimately close off.

In terms of AI, (from what they say on their website) I wouldn't consider them an AI company.

There was only a single glimmer of substance that might change my mind ... they did mention in one video looking for anomalies in financial transaction data. This is potentially something that AI could be useful for - but they didn't elaborate, and from everything else I've read, I suspect any such AI use is just shipped off to Microsoft Azure, with Microsoft providing the AI itself.

Though it never ceases to amaze me that, no matter how much we identify disparate software as an issue where I work, managers repeatedly keep creating more and more disparate software without trying to bring everything cohesively together. If other places are like that, there may be plenty of business for this company in the years to come.

And they may be able to use AI in delivering on that business ... but their dismissal of "unproven AI technologies" leaves me with the feeling that they are a little cynical and cautious of AI technologies ... not great if you're looking to profit from an AI boom.

All purely my initial reaction to reading the website I've linked to above. Absolutely nothing more.

BTW - I've now just looked at their numbers...

According to Hargreaves Lansdown ...
http://www.hl.co.uk/shares/shares-searc ... nd-reports

Market Cap : £775million
Revenue 2016 : £9.6million

Net Assets : £3.8 million
Profit / (Loss) : (£5 million) ... a loss 5x higher than the year before!

the market valuation seems completely insane to me


I wouldn't disagree. Though I'm usually more of a HYP-type investor, so any price for any company that is making a loss and not paying any dividend is expensive in my view. :^)

But this company had revenue of £10million and made a loss of £5million.

I haven't read any further into their accounts to see what's gone on there!

But I think I'll leave it here.

johnhemming
Lemon Quarter
Posts: 1297
Joined: November 8th, 2016, 7:13 pm
Has thanked: 3 times
Been thanked: 127 times

Re: AI endeavours

#105224

Postby johnhemming » December 19th, 2017, 10:00 pm

I have been using Google's streaming voice recognition on phone conversations and it works reasonably well, but still has quite a few errors. Phone conversations have the disadvantage of an 8K sampling rate rather than the 16K at which speech recognition quality is greater (it tends to peak around there). They are also generally 8-bit rather than 16-bit encoded.

I have put up a test system which runs a conference and then puts the transcriptions into a chat (as well as doing TTS from the chat into the conference). If anyone is interested I can give a link.

TUK020
Lemon Slice
Posts: 391
Joined: November 5th, 2016, 7:41 am
Has thanked: 61 times
Been thanked: 148 times

Re: AI endeavours

#105226

Postby TUK020 » December 19th, 2017, 10:08 pm

Onthemove,
Thank you for a brilliant post. Much appreciate the insight.
Feels more like the introduction of electricity - productivity gains occurred over decades, because that was how long it took folks to reorganise things to properly take advantage of the new capabilities.

Clitheroekid
Lemon Quarter
Posts: 1325
Joined: November 6th, 2016, 9:58 pm
Has thanked: 465 times
Been thanked: 928 times

Re: AI endeavours

#105229

Postby Clitheroekid » December 19th, 2017, 10:28 pm

Many thanks onthemove for taking so much time to give me such a comprehensive answer, I'm very grateful. Whilst I may continue to dip in and out I can't see it becoming a core holding any time soon!

odysseus2000
Lemon Slice
Posts: 741
Joined: November 8th, 2016, 11:33 pm
Has thanked: 130 times
Been thanked: 93 times

Re: AI endeavours

#106241

Postby odysseus2000 » December 26th, 2017, 9:50 pm

This is kind of interesting, not for the politics the author goes on about, but for the people involved - begging the question as to what exactly needs such skill sets to oversee it. Almost as if they have some kind of Manhattan-like program on the go:

https://mrtopstep.com/why-is-alphabet-c ... f-defense/

Regards,

stewamax
Lemon Slice
Posts: 578
Joined: November 7th, 2016, 2:40 pm
Has thanked: 2 times
Been thanked: 94 times

Re: AI endeavours

#108025

Postby stewamax » January 4th, 2018, 10:03 pm

The real challenge for AI systems is not to use the speed of a special purpose computer:
- to execute rules in order to predict the outcome of a very large number of possible moves in order to ‘win a game’ (Deep Thought Deep Blue et al)
nor
- to play against itself a very large number of times and thus tune a deep neural net (e.g. AlphaGo)

but...
...when it has developed a successful system (neural net or whatever), to explain any underlying strategy it has ‘developed’

ReformedCharacter
Lemon Slice
Posts: 548
Joined: November 4th, 2016, 11:12 am
Has thanked: 148 times
Been thanked: 109 times

Re: AI endeavours

#108031

Postby ReformedCharacter » January 5th, 2018, 12:04 am

stewamax wrote:The real challenge for AI systems is not to use the speed of a special purpose computer:
- to execute rules in order to predict the outcome of a very large number of possible moves in order to ‘win a game’ (Deep Thought, Deep Blue et al.)
nor
- to play against itself a very large number of times and thus tune a deep neural net (e.g. AlphaGo)

but...
...when it has developed a successful system (neural net or whatever), to explain any underlying strategy it has ‘developed’


This article suggests the same:

https://www.nytimes.com/2017/11/21/maga ... tself.html

RC

odysseus2000
Lemon Slice
Posts: 741
Joined: November 8th, 2016, 11:33 pm
Has thanked: 130 times
Been thanked: 93 times

Re: AI endeavours

#108263

Postby odysseus2000 » January 5th, 2018, 11:27 pm

Long overdue reply to onthemove’s article about AI; sorry to be so slow, but I've just been very busy.

The points made about how academia got AI wrong are worth noting, as this is just one of very many examples of the conservatism in academic research which makes much of it worthless. The future will be determined by the entrepreneurs and inventors who create it; whatever academia predicts will mostly be wrong. I write that as someone with a PhD and many years of academic research, and I could cite many examples but don’t want to bore.

The current AI revolution, in my humble opinion, began with the computer vision example cited by onthemove, combined with vastly more powerful processors being churned out by Nvidia, vastly greater storage, a vastly increased speed of idea dissemination, and the powerful use of AI by major corporations such as Amazon. Many of the leading companies in all industries now have substantial AI programs to direct sales and the logistics of delivery, control inventory and direct research efforts. The effects have been substantial. In the recent Christmas period Amazon were able to ship a vast amount of stuff on time. Of the things that I and various friends ordered there were universally early arrivals, no late arrivals, and everything arrived perfectly. This in a country where many folk say the infrastructure is broken and obsolete. Christmas 2017 said something very different.

Since we are only at the beginning of this revolution, with robots (the first probably being machine-driven cars) likely to have a transformative effect greater than any previous industrial revolution, I believe it is impossible to predict what will happen in a year's time, let alone five. Tim Berners-Lee has argued that the internet is potentially the nervous system of a supercomputer, with the brains being just a few lines of code on top. The argument that AI will need to disseminate what it learns to humans who will then use it seems far too slow given how fast computers operate, and that they do so 24/7, 365.25 days per year. It may be that there is some fundamental limit that stops machines from advancing too much and requires that there will always be a need for human control, but for now I don’t see it. Indeed the ability of AI to look at complicated stuff like GO and develop new, previously un-realised, approaches gives an indication that AI will probably find ways of doing things that humans have missed or academic conservatism has weighed against. I suspect before too long AI will deal with all medical diagnostic data, having robotically collected it.

If we consider mice, we have animals that are far less capable than we are, but which are nonetheless a considerable problem: self-replicating, able to live in our dwellings, difficult to clear out, and major pests in agriculture. Clearly current AI is much less sophisticated than a mouse, but in specific applications, such as driving cars, AI can compete with humans, something impossible for a mouse. This has all happened in about 6 years and the rate of AI innovation is exponential.

In terms of investment I would guess that most of the AI companies born on this bandwagon of excitement will perish. There will likely be opportunities to make serious money within the current boom, but also opportunities to lose serious money, as in the 1998 to 2000 period. AI looks difficult to protect with patents and such, so I expect a scenario very like 3D printing: a technology that was dismissed but which is now vital to many businesses, yet with few winners, as the printers are commodities sold at commodity margins, which is great for buyers (I just ordered one) but bad for maker margins.

Generally I do not like "picks and shovels" investments because most of the ones I have studied have not worked well. E.g. several years ago I looked at many of Apple’s suppliers, and the clear result was that the better bet was Apple, as most of the suppliers, who generally create commodity products, underperformed. However, for now Nvidia looks to be the best potential picks and shovel, with the other advantage of heavy exposure to Bitcoin mining. Other potential winners look to be Amazon, Google, Walmart and Netflix. Some like MSFT too, but I feel their core software business is doomed, which could hit their cloud operations; moreover they don’t have the moats of Amazon et al. Tesla remains an enigma, loathed by many fund managers, but Musk has shown himself capable of extraordinary insights and business skill, so I personally like Tesla.

What I expect to see shortly (next few years) are military AI robots capable of defeating human soldiers, and Putin’s remark that drones will fight the next war doesn’t seem daft at all. Atlas and other robots from Boston Dynamics (loads of videos on the net) look quite primitive, but each iteration gets a little better, and so, like the first military planes that were of little use but soon gave way to killing machines that won battles, I expect military AI to advance. If, as we see with machine-driven cars, there is a clear objective and way of proceeding, it is difficult for me to see how humans will be able to resist machines that can be protected with armour and deployed in ways too violent or too exposed for humans. The argument that machines will never be allowed to kill humans autonomously seems weak to me. It was only a little while back that the US apparently bombed the Médecins Sans Frontières hospital in the Afghanistan conflict, and anyhow cruise missiles and drones now target and kill humans, albeit human-directed; but does the general told to deal with some troublemakers care about anything other than killing the enemy?

None of what I think may happen, of course; I am after all academically trained (see beginning). But the current industrial revolution is like nothing I have studied before. The scale of the internet far exceeds what I thought possible when it began, the speed and power of personal computers, mobile phones etc far exceeds what I thought could be done, and the more I see of humans the more I feel they are prone to doing daft things.

Regards,

onthemove
Lemon Pip
Posts: 51
Joined: June 24th, 2017, 4:03 pm
Has thanked: 8 times
Been thanked: 34 times

Re: AI endeavours

#108425

Postby onthemove » January 6th, 2018, 10:02 pm

odysseus2000 wrote:...for now Nvidia looks to be the best potential picks...


I'd probably agree on that. I nearly wrote a follow-up to my initial post suggesting them. Where AI is trained without the help of cloud computing, the number-crunching libraries now tend to ship the work off onto GPUs, usually Nvidia's.

While the 'application' end of the process, as opposed to the 'training' end, doesn't need such significant computing power, there is likely to be a market for dedicated, 'programmable' convolutional neural network ('deep learning') chips, i.e. "AI chips", which can apply a pre-trained network very power-efficiently.

I could imagine in future that one or two manufacturers - and nvidia seems well placed at the moment to be one of them - could become the standard, much like Intel has been for CPUs, etc.
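To give a rough feel for where the power efficiency of such "AI chips" comes from (this is my own toy sketch, not any vendor's design): a frozen, pre-trained network can be run in cheap low-precision integer arithmetic, e.g. 8-bit weights with a single floating-point scale factor:

```python
# Toy sketch of 8-bit quantisation, the trick that lets a pre-trained
# network run in cheap integer arithmetic on a dedicated inference chip.

def quantize(weights):
    """Map float weights to small integers plus one scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def int8_dot(x_q, w_q, x_scale, w_scale):
    """Dot product accumulated in integers, rescaled once at the end."""
    acc = sum(a * b for a, b in zip(x_q, w_q))
    return acc * x_scale * w_scale
```

The multiplies and accumulates all happen on small integers; the floating-point work shrinks to one rescale per output, which is what makes fixed-function silicon so frugal.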

"It may be that there is some fundamental limit that stops machines from advancing too much and requires that there will always be a need for human control, but for now I don’t see it. Indeed the ability of AI to look at complicated stuff like GO and develop new, previously un-realised, approaches gives an indication that AI will probably find ways of doing things that humans have missed or academic conservatism has weighed against."


The fundamental limit is engineering :^)

To be clear on the GO example: GO is very constrained. Although there may be a lot of strategies, the rules are clear and the domain very limited and fixed. There is a finite set of clearly quantised board positions. There is also a very clear turn-taking methodology.

All this allows for an 'adversarial' approach to learning strategies. In effect the trainers have the networks play each other. But all the time, they are playing fully constrained in a very 'logical' clearly demarcated world. All the time, there is a 'controlling' program that is ensuring that they are playing to the rules of GO.

And that's the key point. To be useful, a deep learning network (or any other AI for that matter), needs to have a clear aim, a clear context. Just like any worker in a job - they need to understand what they are expected to do.

For example, if you took 20 arbitrary people, bought an empty factory building and just put those people inside without any instructions, it would be unlikely that you'd end up with a functioning business.

The people that developed that GO algorithm didn't use a deep learning algorithm that had learned to understand English and then tell it "Go and learn to play GO". They provided an awful lot of 'scaffolding' in which the network was placed, so that it just learned to play GO. The network probably doesn't even know that it has learned to play GO. All it has done is learn a mapping of inputs to outputs. It (the AI) doesn't understand what that mapping is for. It just knows how to do the mapping it has been trained to do. A mapping that is better than any human has managed. But still just a mapping nonetheless.
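To make the 'scaffolding' point concrete, here's a toy sketch (entirely my own, nothing to do with DeepMind's actual code) of adversarial self-play on a far simpler game - Nim, where players alternately take 1-3 sticks and whoever takes the last stick wins. Notice that the legal-move checking, turn taking and win detection all live in the surrounding loop; the trained artefact is literally just a table mapping state to move value:

```python
import random

# Toy self-play in the spirit of the GO setup: the "scaffolding"
# (legal moves, turn taking, deciding who won) is all in this loop.
# The trained artefact is just a table mapping (state, move) -> value.
# Game: players alternately take 1-3 sticks; taking the last stick wins.

def legal_moves(sticks):
    return [m for m in (1, 2, 3) if m <= sticks]

def train(episodes=50000, eps=0.2, alpha=0.1, seed=0):
    rng = random.Random(seed)
    q = {}  # (sticks, move) -> estimated value for the player to move
    for _ in range(episodes):
        sticks = rng.randint(1, 12)
        history = []  # one (state, move) per turn, players alternating
        while sticks > 0:
            moves = legal_moves(sticks)
            if rng.random() < eps:  # explore
                move = rng.choice(moves)
            else:                   # exploit the current mapping
                move = max(moves, key=lambda m: q.get((sticks, m), 0.0))
            history.append((sticks, move))
            sticks -= move
        reward = 1.0  # whoever moved last took the last stick and won
        for state, move in reversed(history):
            old = q.get((state, move), 0.0)
            q[(state, move)] = old + alpha * (reward - old)
            reward = -reward  # alternate players going backwards
    return q

def best_move(q, sticks):
    return max(legal_moves(sticks), key=lambda m: q.get((sticks, m), 0.0))
```

With enough episodes the table should settle on the classic strategy (leave your opponent a multiple of four sticks), yet nothing in the table 'knows' it is playing a game at all.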

And this is where engineering comes in with AI.

The AI is providing a few more modules - functions - which are available to engineers to build into more complex systems.

For example, with self driving cars, it will be impossible in practice to just take the appropriate sensors, feed them into a deep learning network, and then hey presto, magically that network does everything - your car will drive you from A to B.

That won't happen. Admittedly someone did try that, using neural networks, in the 1990s, with disastrous results. (I can't find the link now - with all the current buzz about self driving cars, it's getting lost amongst all the current chatter ... but from what I recall, the car managed to stay on the road OK until it reached a bridge ... it turned out the network had learned something like grass indicating the constraint of the edge of the road, and when it came to a bridge without grass there, well... they called it quits.)

In reality, the AI is engineered into multiple layers - in traditional engineering style.

Google (now Waymo) provide an excellent talk here, which gives some good details on 'how a driverless car sees the road' ... hopefully this link should start at the most interesting part.... https://youtu.be/tiwVMrTLUWg?t=470

What you can see from that (for example at 8:17 into it) is how the 'system' has identified the entities around it - and classified them into categories. That is where the main function of the (new) deep learning AI has come in. The deep learning algorithms have enabled the engineers to identify the entities around the vehicle, with a reliability that now matches humans.

[What follows is me reading between the lines of what I can see in the video, and from other sources, combined with my AI background, and software engineering background, based on how I would approach the problem ... and which for the most part seems to be - in general terms at least - how google engineers are approaching the problem]

Then effectively there is a break in the AI. So you have a small - but very clever - layer identifying the entities, but that is a fixed layer, with a clear, demonstrable remit. You can show it millions and millions of pictures of things you might find on the roads, and you can then judge how effectively and how reliably it does that job. This gives you a module/layer. And that is the layer you are seeing in the video at that point.

At this stage, there is no planning. At this stage there isn't necessarily any motion consideration. Just first identify what the sensors and cameras can see.

The next level could then simply add physics-based motion to each entity - potentially from the lidar sensors, or by comparing sequences of video frames, etc. This would be regular collision avoidance of moving bodies. Basic Newton's laws of physics, taught in every secondary school.

Basically, without any further intelligent input ('behaviour') from any of the identified entities, is anything on a collision course?

At this point, if there is a stationary car in front, and you are heading straight for it and about to run out of braking distance, the rest of the system detailed below can be short-circuited and the emergency brake applied at this moment. I would expect any self driving car to have a lot of shortcuts of this type, which constantly monitor whether a direct, simple collision is likely, and which don't allow the car to knowingly put itself into a position that requires positive action from someone else to avoid a collision (i.e. very defensive driving).
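A minimal sketch of that kind of short-circuit check (my own illustration, not any manufacturer's actual logic) - plain school kinematics, no AI involved:

```python
# Minimal sketch of the "short circuit": before any clever planning,
# check with school physics whether a safe stop is still possible.

def braking_distance(speed_mps, decel_mps2=6.0):
    # d = v^2 / (2a) for a constant deceleration a
    return speed_mps ** 2 / (2.0 * decel_mps2)

def must_emergency_brake(speed_mps, gap_m, reaction_s=0.5, margin_m=2.0):
    # Brake now if the gap to a stationary obstacle is about to fall
    # below reaction distance + braking distance + a safety margin.
    stopping = speed_mps * reaction_s + braking_distance(speed_mps)
    return gap_m <= stopping + margin_m
```

At 12 m/s (about 27 mph) the car needs roughly 12 m to brake plus 6 m of reaction distance, so with these invented parameters a 15 m gap triggers the brake while a 30 m gap does not.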

Then I believe (from what I've seen of the google videos) there are then separate deep learning algorithms that predict the behaviour of the entities the first layer has identified.

From an engineering point of view - i.e. the ability to be able to develop and test the system - it is probably unlikely that google (or any other serious player) would merge the two together into a single network and just hope the AI will sort it out.

Each layer needs to have a clear remit, with testable scope. And the way the presentation in the video builds on each layer, I believe, is reflecting how their systems are probably actually doing it in the real world.

Separately, a parallel layer would involve analysing the static entities - the traffic lights, the road signs, etc. - and then pulling information from them. Where it has identified a road sign, what limit it is indicating. Where it has identified a traffic light, what colour is currently showing. Where it has identified lane markings, what those markings are - effectively categorising them according to the markings in the highway code.

This would be engineered as a separate layer to the 'moving' entity analysis layer.

From an engineering perspective, although an AI algorithm might read the status of a traffic light, might identify a give-way road marking, etc, I believe these would then be fed to a more traditionally engineered algorithm, which can provide a clear, demonstrable output of the system's understanding of the road and the associated rules around it. And I believe this is the kind of information you can see in that - and other similar - videos, where they show the world around the car with annotations indicating what is going on.
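A hypothetical flavour of such a traditionally engineered layer (all the names and facts here are invented for illustration): the perception layers hand over symbolic facts, and a plain function turns them into an inspectable, testable statement of what the rules currently require:

```python
# Hypothetical sketch of the non-AI rules layer: the perception layers
# hand over symbolic facts, and a plain, testable function states what
# the rules of the road currently require.  All names here are invented.

def road_rules(facts, speed_mps):
    """facts: e.g. {'light': 'red', 'limit_mps': 13.4, 'give_way': True}"""
    decisions = []
    if facts.get('light') == 'red':
        decisions.append('stop at stop line')
    if facts.get('give_way'):
        decisions.append('yield to crossing traffic')
    limit = facts.get('limit_mps')
    if limit is not None and speed_mps > limit:
        decisions.append('reduce speed to limit')
    return decisions or ['proceed']
```

The point is not the specific rules but that this layer's output can be exhaustively unit-tested, and updated overnight if the highway code changes - which a monolithic black-box network could not.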

As an aside, a more recent video I saw, showed how google were looking at presenting this information to the occupants of the car in a way that helps give the occupants confidence in the capability of the car - as a passenger, you can see for yourself that the car is able to see and identify all the things around it that you can see, and it can show you with an arrow the path it intends to take, the status of lights, etc.... the talk that I saw, said this was very important because if the car stops, such a display will allow the user to understand _why_ the car has decided to stop.

From an engineering point of view, having this as a traditional (non-AI) layer, is probably going to be critical to enabling cars to be updated quickly and reliably if the rules of the road ever change - which they will!

Once the rules have been analysed, the permitted paths for the car can be identified. Once the possibilities are determined, then a separate AI can perform path planning as to which of the permitted routes to take. This may be using deep learning algorithms, or could simply now use more traditional planning algorithms, or other custom algorithms. The point is, because the whole system is engineered into layers, this path planning layer, can use completely separate techniques to the deep learning algorithms used to identify other cars, pedestrians etc.
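Purely illustratively (my own sketch, with an invented cost weighting): once the permitted paths are enumerated, the selection itself can be ordinary cost minimisation rather than deep learning:

```python
# Purely illustrative: given the permitted paths the rules layer allows,
# the planner can pick one by ordinary cost minimisation.  The 10x
# weighting on the comfort penalty is an invented example.

def choose_path(permitted_paths):
    """permitted_paths: list of (name, length_m, comfort_penalty)."""
    if not permitted_paths:
        return None  # nothing legal: defer to the emergency-stop layer
    return min(permitted_paths, key=lambda p: p[1] + 10.0 * p[2])[0]
```

Because the layers are separated, this planner could be swapped for A*, a learned policy, or anything else without touching the perception or rules layers at all.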

--

I suppose what I'm really trying to emphasize is that in the real world - to make things you can sell to consumers with a 12-month guarantee, or even things to which they will entrust their life, or trust in the operation of your business - systems are inherently going to have to be constrained, with a clearly defined and testable remit.

And that requires engineered modularisation.

To take your example of Amazon and others providing deliveries ... they may use some AI for scheduling the deliveries. But that isn't opaque. The planning aspect is still monitored by people. People can overrule it if felt necessary. The AI isn't top to bottom running the business excluding people from it.

Sure, the top level business might use deep learning to predict likely workload for planning staffing. That might consider the weather, the news, the economy, and so forth. And managers might trust the output of such algorithms and commit to bringing in enough staff to cope with the predicted demand.

There may be AI used in looking for suspicious behaviours in workers. Looking for potential criminal behaviour in video surveillance. Or looking at individuals' CV histories to spot anomalies, etc.

There may be AI in the sat navs of drivers, listening for their instructions to turn the volume up, or telling the unit the road is blocked.

There may be AI monitoring the cars to detect when they may need a service before they breakdown and are unable to finish their delivery.

But these are all separate AI components, each doing a discrete task, and only glued together into an 'organisation' by people managing those systems.

Those people may also use AI to help them manage those systems, to monitor all the different levels, but ultimately it will be a partnership.

Just like with self driving cars, all businesses need to adhere to regulations. If you tried to run your business with a single deep learning AI (or other general AI) that learned your whole business as a single black box, you'd be unlikely to be able to respond to regulation changes - even simple things like working time directives, etc.

And from a cost perspective, it is prohibitive to build general-purpose humanoid replacements. If you want an AI to monitor the quality of your product coming off a production line, you won't go to a vendor selling a general-purpose, two-armed, two-legged, Terminator-style humanoid android that could decide to take over the world.

"This has all happened in about 6 years and the rate of AI innovation is exponential."


While the current AI boom is exciting, be careful not to take it out of context.

Ultimately all it is doing is showing that computers can now do some more tasks as well as or better than humans.

But that isn't something new to the past 6 years.

Ever since the invention of the computer, computers have been able to perform calculations much faster, with bigger numbers, etc, than humans ever could.

Spreadsheets then allowed businesses to organise and manage accounts better than they could with pen and paper.

Computers could draw charts of that data faster and more efficiently than humans ever could.

Regarding the rate of AI innovation, I don't agree it's exponential. Quite the opposite. I'd say there has been a substantial step change, but like GPS (the global positioning system), the current change will be transformative yet limited.

Just like 20 years ago, when GPS was going to be everywhere - even your toaster would know where it is - the current buzz with AI is a substantial step in a particular technique.

Sure, there's currently a rush to apply this anywhere and everywhere and that will change the way we live and do business. But it is limited in scope. Once all the nooks and crannies have deep learning in them, we're back to waiting for the next advance - and they don't happen often!

And yes, I'm sure the current deep learning AI will somehow even make it into your toaster!

But your toaster isn't going to turn into the next Adolf Hitler taking over the world...

... but you could end up with one like this ... https://www.youtube.com/watch?v=LRq_SAuQDec ... but only because someone specifically designed it like that for a laugh.

At the end of the day, all this AI buzz is really just engineering.

Very exciting and interesting engineering.

But it is just engineering.

Could be used for good or for evil (war or peace) - but that will be down to how the engineers, engineer it.

onthemove
Lemon Pip
Posts: 51
Joined: June 24th, 2017, 4:03 pm
Has thanked: 8 times
Been thanked: 34 times

Re: AI endeavours

#108431

Postby onthemove » January 6th, 2018, 10:44 pm

I should have clicked 'Up Next' on the video I linked to in my previous post, before posting :)

Which would have shown me this video...

https://www.youtube.com/watch?v=URmxzxYlmtg

It's a video explaining - in detail - nVidia's self driving car platform.

It shows very well how the system is broken into modular components. And he even shows each module operating separately ... lane detection, 'safe to drive' area detection, and so on.

I hadn't realised they (nVidia) had already developed this, although to be honest I suspect there will be many more iterations before it finally appears in real world cars. But it does show that they are really trying to put themselves at the forefront of self driving cars.

Though how this will compete with Waymo, etc, I'm not sure. I believe that in terms of the sensing and computation, Waymo are building all their components from scratch, in house, so won't likely be using this platform.

odysseus2000
Lemon Slice
Posts: 741
Joined: November 8th, 2016, 11:33 pm
Has thanked: 130 times
Been thanked: 93 times

Re: AI endeavours

#108432

Postby odysseus2000 » January 6th, 2018, 11:50 pm

Hi Onthemove,

Thank you for your most interesting post.

I agree that AI is currently set up to do specific jobs with well-defined boundary conditions, as in the Go example. Where I would diverge is in considering this to be different to what humans do. E.g. an accountant works in a job with very well-defined boundary conditions, which can change due to legislation, but that just creates another well-defined boundary. Similarly for very many other jobs, and the more sophisticated the job, the more tightly defined are the boundary conditions. E.g. if you're a fighter pilot you have very tight rules of engagement; if you're a physicist you have very tight fundamental laws that you have to work within. So rather than AI, as touted, hitting unskilled and semi-skilled jobs, I suspect it will hit highly skilled jobs first.

Sure, computers since their invention have been much faster than humans; the difference now is that at some level they can think, or perhaps more correctly mimic thought. That is new. By exponential I meant in terms of application growth, in that as soon as business X has success with AI, business Y notices and begins its own AI etc.

Regarding the Google self-drive system, it is not clear to me that it is practical. Using radar and its other sophisticated sensors may be better, but is it the Betamax against the VHS of Tesla's system, which is much simpler, relying on cameras, not radar, and which, by all the feedback from the hundreds of thousands of Tesla cars on the roads, is building its own database covering all the driving experiences these cars see, and which is being integrated into Tesla cars to do other jobs? The most recent being how the AI turns on the wipers when needed without the need for rain or other sensors. Tesla also has, or will shortly have according to Musk, the ability to make its own AI chips rather than relying on Nvidia. It has already broken its collaboration with the Israeli company Mobileye to go it alone.

Whether all of this AI stuff is hype without any real substance is, at least for the moment, reasonably clear in that much of what has been suggested is hype. One sees in one's own computers and software how challenged they are: fine following human instructions, otherwise useless. But as I see the companies that are using it effectively, such as Amzn, Google, Facebook, Apple, Walmart, Netflix… there is a clear separation between them and their less AI-powered competitors, such that at some level AI seems to be doing something powerful and new. Sure, this is still run by people, but it is the AI that is guiding them, and as they are working in a tight rule-based environment it seems, at least to me, not impossible that many of the managers could before too long be relegated to the role of checking the AI, and later to no role at all.

I want to believe that AI will be nothing new, that humans will still be needed, but extrapolating forwards I am not so sure. Still I have been wrong many times and so it will be interesting to see what happens.

Regards,

ReformedCharacter
Lemon Slice
Posts: 548
Joined: November 4th, 2016, 11:12 am
Has thanked: 148 times
Been thanked: 109 times

Re: AI endeavours

#108433

Postby ReformedCharacter » January 7th, 2018, 12:33 am

odysseus2000 wrote:
Regarding the Google self-drive system, it is not clear to me that it is practical. Using radar and its other sophisticated sensors may be better, but is it the Betamax against the VHS of Tesla's system, which is much simpler, relying on cameras, not radar...

Regards,

No:

To make sense of all of this data, a new onboard computer with over 40 times the computing power of the previous generation runs the new Tesla-developed neural net for vision, sonar and radar processing software.


https://www.tesla.com/en_GB/autopilot

RC

odysseus2000
Lemon Slice
Posts: 741
Joined: November 8th, 2016, 11:33 pm
Has thanked: 130 times
Been thanked: 93 times

Re: AI endeavours

#108434

Postby odysseus2000 » January 7th, 2018, 12:50 am

Reformed Character


No:

To make sense of all of this data, a new onboard computer with over 40 times the computing power of the previous generation runs the new Tesla-developed neural net for vision, sonar and radar processing software.


https://www.tesla.com/en_GB/autopilot

RC


Thank you for the correction. I wasn't aware of the forward radar, or the number of ultrasonic sensors. Although still, as I understand it, a lot simpler than the Google system.

Regards,

Itsallaguess
Lemon Quarter
Posts: 2361
Joined: November 4th, 2016, 1:16 pm
Has thanked: 493 times
Been thanked: 1172 times

Re: AI endeavours

#108440

Postby Itsallaguess » January 7th, 2018, 6:49 am

onthemove wrote:
From an engineering perspective, although an AI algorithm might read the status of a traffic light, might identify a give-way road marking, etc, I believe these would then be fed to a more traditionally engineered algorithm, which can provide a clear, demonstrable output of the system's understanding of the road and the associated rules around it.


I'm not sure what sort of road-infrastructure the current level of self-driving-car testing has been carried out on, but given that around 98% of the roads around my area look to have had their road-markings painted in chalk around the turn of the 19th century, I fail to see how a high-level roll out of this technology can ever be achieved in the UK without a very expensive country-wide programme of road-improvements first being carried out. This issue can't be unique to this country either....

I agree that nVidia looks prime-placed to benefit from the technology side of things (although prime-movers are well-known to drop by the wayside quite dramatically in the tech arena...), but if we're talking about pick-and-shovel makers then I think the companies upgrading and maintaining the required road-infrastructure will be clear beneficiaries of any wide-scale roll out of this technology.

Interesting thread, thanks for taking the time guys, although I've got to say that I think Ody is over-egging the pudding somewhat, and forgetting that whilst any given technological AI solutions might become 'possible and available', there then, ultimately, comes the decidedly thorny issue of public-acceptance....

Google glasses, anyone?

Cheers,

Itsallaguess

tjh290633
Lemon Quarter
Posts: 2270
Joined: November 4th, 2016, 11:20 am
Has thanked: 133 times
Been thanked: 656 times

Re: AI endeavours

#108452

Postby tjh290633 » January 7th, 2018, 9:46 am

What happens when two self guided vehicles meet in a narrow lane? Which one backs up to the nearest passing place? What about any other vehicles behind, which may also be self guided?

It would never work in our lane.

TJH

onthemove
Lemon Pip
Posts: 51
Joined: June 24th, 2017, 4:03 pm
Has thanked: 8 times
Been thanked: 34 times

Re: AI endeavours

#108453

Postby onthemove » January 7th, 2018, 9:48 am

Itsallaguess wrote:
I'm not sure what sort of road-infrastructure the current level of self-driving-car testing has been carried out on, but given that around 98% of the roads around my area look to have had their road-markings painted in chalk around the turn of the 19th century, I fail to see how a high-level roll out of this technology can ever be achieved in the UK without a very expensive country-wide programme of road-improvements first being carried out. This issue can't be unique to this country either....



Hopefully this will start at the right points...

https://youtu.be/URmxzxYlmtg?t=884

https://youtu.be/URmxzxYlmtg?t=946

https://youtu.be/URmxzxYlmtg?t=413

These parts of the video are demonstrating how the car knows where it is safe to drive... including a country lane with no lane markings, even at night. And even going onto rough ground off road, when directed by roadwork cones.

I will acknowledge that video is demonstrating nVidia's platform, and is probably not as advanced as Waymo's. In other words, I think what is being shown in this video is more of a prototype aimed at testing the individual component modules. Waymo seem to be focussing on demonstrating their combined systems when fully functional.

Realistically, self driving cars will have to handle faded - and non-existent - road markings. For example for the final stage of parking onto a drive. Or a rough, potholed car park. Or the many thousands of miles of single-track road in the UK (similar to the one the car in the above video is driving on).

You can't have a self driving car just stop because the lane markings are worn.

In one video (not the above) they say that actually navigating in snow isn't really an issue (handling in snow, however, is a completely different issue - in practice that will be a while yet).

I suspect that is because, in essence, the first layer of autonomy is simply understanding what objects are around you, and knowing to avoid big solid things. You can do that in snow (road handling notwithstanding). And in rain, where the reflections on the wet road are obscuring the road markings.

It all comes back round to identifying where it's physically safe to drive, independent of what lane markings tell you (see the third video link above). After all, just because a lane is marked doesn't mean it's safe to drive in it - it may have a sinkhole opened up and not yet coned off. Or a tree might have fallen across it. Or there may be a spilled load from a lorry all over it.

So even where a lane is marked, you may have to actually deviate from it to avoid an obstruction.

Those abilities to handle issues within lanes are, in essence, the same abilities used to decide where it is safe to drive when the lane markings are worn or non-existent.

