The number of times I have to explain this to people is scary. They genuinely think that things like Siri or Alexa are "AI". Hell, even Google's computer Go player or IBM's Deep Blue / Watson aren't AI. It just doesn't exist at the moment. Don't even think about trusting that "AI" car to drive you anywhere on an ordinary public road.
"AI" as we actually have it is really just a sufficiently complex system hiding behind rules laid down by the creators (humans). Though the rules can change, and we can try to let the system form its own rules (e.g. by instructing it how to record experiences and then use them as a reference later, and then exposing it to things we want it to learn), neither of those options operate in the absence of human instruction to any degree of satisfaction, and they are being hand-held and "programmed" at every stage, even when we feed them data in the hope the machine will start to recognise what a cat looks like (and recent studies show that adding 5% random noise to an image can make a picture it recognises as a cat without the noise be recognised as just about anything with the noise).
"AI" systems are always limited by how we've instructed them to do just that. So far, all "AI" has proven is that you can't train Siri to learn your voice, or Google to recognise a child-safe image, completely reliably no matter how much data you throw at it.
As such, almost all modern systems are - to paraphrase Arthur C Clarke - sufficiently advanced technology that's nearly indistinguishable from magic. Sure, they do what we want. Sometimes. But they never get it right completely, and they have inherent limitations at which they give up and can do no more. Additionally, you can train a chess computer to beat a grandmaster, but the same program can't recognise a cat in an image. Or if it can, it can't also formulate new mathematics. Thus all of the "AI" advertised to you is really just complex algorithms, mostly written, or at minimum heavily guided, by humans - and often the less specific the instruction, the less reliably they work in real life (a computer that turns on your washing machine at 8:00pm will generally work; a computer that tries to guess when you'll be home will tend to get it spectacularly wrong over time).
Beyond that, the implication that - somehow - we can create an AI that works like a human runs into a lot of problems. Sure, neural networks are fun to tinker with - and incredibly limited. They are also based on a severely limited model of thinking.
But to me, this article (https://backchannel.com/the-myth-of-a-superhuman-ai-59282b686c62) picks up on all the pertinent points. The assumption that we can make something smarter than ourselves in every area we ever train and test it, and that it can do so within real-world time and resource constraints (we've already hit physical limits on processor speed, for instance), is just nonsense at present. Everything sold under the moniker "AI" is snake-oil being sold to you. Alexa isn't AI. Nor is Siri. It's easy to baffle either of them, despite the fields of computers sitting on their back-end to recognise and answer your queries. They may be useful tools, but they are certainly not "intelligent".
The tiny slivers of silicon that actually do the work in, say, Google, if you were to compress them together? They probably wouldn't fill a cardboard box. The supporting equipment, however, pulls megawatts of power and sits in rooms around the globe that together would cover a small city. This is why the brain is amazing - the compactness, the efficiency and the minute scale, not the speed or how many books it's scanned in.
And yet we still can't match "AI" against any task that's not rigorously designed, tested and tweaked. Image recognition, for instance: working in schools, I can guarantee you that no web filter, or combination of web filters, can stop people accidentally being exposed to inappropriate images. And it's not even as if there are huge professional companies out there TRYING to show their inappropriate content specifically to children - it's mostly just incidental stuff that slips through. But no amount of verification, even with human assistance, gets it right. Sure, it can make cool toys that apply filters to your photos, give you a set of ears, and let you "move" a virtual avatar, but it's not AI. It's just some very clever statistics code running at high speed, for the most part.
I think the reason for this is related to one element mentioned in the above article: the Turing Machine. This was actually created while tackling another problem written about by the same man: the Halting Problem. It basically says that you can never write a computer program - or a mathematically rigorous algorithm, which is the same thing - that can reliably determine, for ANY program given to it, whether or not that program will ever stop. You could write one that can tell you that for certain classes of program it analyses, but you can never make a "generic" program analyser that can take any program and tell you whether it stops.
If you were to feed such a program analyser INTO the program analyser, would it be able to tell you if the program analyser ever finished analysing? Maths says no.
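The usual way to see why is a short self-referential trick, sketched below in Python. The names `halts` and `troublemaker` are of course hypothetical - the entire point is that the first function can never actually be written.

```python
# `halts` is the hypothetical "generic program analyser"; the contradiction
# below is why no correct version of it can exist.

def halts(program, data):
    """Hypothetical oracle: True if program(data) eventually stops."""
    raise NotImplementedError("this is exactly the function that cannot exist")

def troublemaker(program):
    # Do the opposite of whatever the analyser predicts about running
    # `program` on its own source.
    if halts(program, program):
        while True:        # analyser said "it halts", so loop forever
            pass
    else:
        return             # analyser said "it loops forever", so stop at once

# Now ask: what should halts(troublemaker, troublemaker) return?
# "True" means troublemaker then loops forever; "False" means it halts.
# Either answer is wrong, so the assumed analyser cannot exist.
```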
Turing et al. proved the bare mathematical case that such a program cannot exist. In doing so, he boiled down all computing - before much of it even existed - to a theoretical minimum machine that is mathematically equivalent to your PC. If your PC can do it, so can his machine. If his machine can do it, so can your PC (resource limits and speed of execution aside, but that's covered above). Similarly, ANY "Turing-complete" machine can simulate any other Turing-complete machine. Modern PCs and processors are still strictly Turing-complete (or worse!). They cannot do anything that any other Turing-complete machine cannot do.
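As a tiny demonstration of that equivalence in one direction, here is a minimal Turing-machine simulator in Python. The machine it runs is made up purely for illustration - it just flips a string of bits and halts - but the same handful of lines will run any machine you can write as a rule table, memory permitting.

```python
# A minimal Turing-machine simulator: a tape, a head, a state, a rule table.

def run_tm(rules, tape, state="start", blank="_", max_steps=10_000):
    tape = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            return "".join(tape[i] for i in sorted(tape))
        symbol = tape.get(head, blank)
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    raise RuntimeError("gave up - which is what the halting problem warns about")

# (state, symbol) -> (symbol to write, head movement, next state)
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_tm(flip_bits, "10110"))   # -> 01001_ : every bit flipped, then the blank
```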
However, there are things - such as the Halting Problem - that neither your PC, nor acres of datacentres, nor a theoretical Turing machine on paper could ever solve. It's far from a mathematical proof, but I'm tempted to put these facts together: every PC is equivalent to a Turing machine, everything it can do is limited to what any other Turing machine could do, and there are quite simply stated problems that no Turing machine can solve.
If you put those together, there's nothing whatsoever to suggest that a computer (no matter how powerful or advanced) can ever do everything we do. And there's also nothing to suggest that a human brain is in any way limited to being "merely" Turing-complete - Turing-complete is the minimum requirement, but we may well exceed that by ALSO doing things that no Turing-complete machine can do.
If we are more than Turing-complete, AI on any current computer architecture could never work the way we do, as we would be able to do things it couldn't. Would a human - or sufficient numbers of humans, even an infinite number of humans, if we're allowed to join them all together as we do machines - be able to solve the halting problem in general? Nobody can answer that absolutely, but my intuition says: yes. Given that we were able to think up the halting problem, analyse solutions to it, and use it to prove a mathematical certainty - all as an attempt to "hit the limits" of the programs we know we can make - I think there's something more at play there. I think that something is over and above Turing-completeness.
And if that's true, that would distinguish us as operating in a context not accessible to a Turing-complete machine.
There is, though, a ray of light in the complete unprovability of the human case: humans have proven incapable of proving some other things too. The very question of whether we *are* able to do the above, and the existence of results such as Gödel's incompleteness theorems - the logical consequence of which is that we can never establish, from within mathematics, that mathematics is both complete and consistent - have answers that we cannot provide within the limits of our own thinking. Is that the limitation of a human mind trapped inside a complex, but still merely Turing-capable, system of thinking?
But the only thing that's for sure is that we don't actually have AI. It doesn't really matter what self-driving car manufacturers or supercomputer builders claim. They have performed extraordinary acts that reach far beyond what a human could do. But they do it by brute force and instruction, for the most part, even if that instruction is the detail of "how to learn". As The Matrix wisely noticed: "their strength and their speed are still based in a world that is built on rules. Because of that, they will never be as strong or as fast as you can be." Computers are still computers and, despite appearances, can only ever do exactly what they were instructed to do for the input they receive.
Hardware failure aside, if your computer crashes, or something unexpected happens, or it "experiences a problem", that's because it was instructed to do just that. By some human at Microsoft or Apple, possibly, but it's merely following instructions. Even if they build a "learning computer", and that goes wrong? It's because it could only EVER have gone wrong, given the input it received and the instructions it was forced to act upon. Which is a scary concept when you think that there are cars on the road modifying their steering on the basis of what Tesla tells them to do and what their cameras see. Sure, if they've got it right, it will work well enough.
But the problem is that you can't know, and you can't pretend to understand the program. Either someone has written sufficiently complex variations of "apply brake if pixel X is green" (heuristics, of the kind caricatured in the sketch below), or you have a free-running program that nobody can understand, modify, limit or direct, and which could act unpredictably at any point (e.g. interpret a paper bag as a child and veer into oncoming traffic to avoid it). And in actuality, on anything that runs on silicon, the first is also true even when the second is true. The problem is that if the rules don't cover everything, the computer isn't really doing anything more than "making up" its own new rules to cope, based on... the same rules that didn't cover everything!
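Here is that caricature. The pixel coordinates, thresholds and scenario are all invented; the point is only that a hand-written rule is perfectly legible and perfectly brittle at the same time.

```python
# A deliberate caricature of the hand-written-heuristic case:
# one hard-coded rule about one hard-coded pixel.

def apply_brake(frame) -> bool:
    """Brake if the pixel we happen to watch looks green-ish."""
    r, g, b = frame[240][320]          # one fixed pixel in a 640x480 frame
    return g > 200 and r < 100 and b < 100

frame = [[(0, 0, 0)] * 640 for _ in range(480)]
frame[240][320] = (50, 230, 40)        # a green paper bag drifts past that pixel
print(apply_brake(frame))              # True - it brakes for a bag, not for a child
```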
Such systems are limited by the resources available to them, but there's nothing yet to suggest that even without those limits they could go on to learn everything, or even enough, to operate as we would like them to, let alone operate as we do ourselves.
In the same way that a hammer beats your fist for knocking in nails, computers and algorithms certainly advance us, and allow us to do things we couldn't otherwise. But the first step is always making the tool. The wood and metal do not extract themselves, and hammers don't grow on little hammer trees. And still today, we're making the tools to our own rigorous instruction, even if they go on to perform feats that we couldn't ourselves. A hammer is no good for unscrewing a cabinet; you need another tool for that. And the tools that try to do it all tend to have limitations that mean they aren't very good at any of their jobs.
This is where I see the current state of "AI". Hitting a roadblock, and compensating by pressing the throttle harder and using more power to smash through it. Not realising that we'd only have to steer down another path to get past, or that there may be no physical way through anyway.
Any mention of "AI" attracts derision from myself, because it's not what people imagine it to be, or what people have been selling it as. And I'm not sure that it could be. "Ever" is a long word for a mathematician like myself, but at the very least it can't be that way "anytime soon".
But then I look at image recognition, speech recognition, and anything whose rules cannot be laid down in writing within a limited scope. (The rules of chess, by contrast, can be described on a single sheet of A4; beating a grandmaster is rather trickier than reading that sheet, but the actions available are very limited, even though they generate extraordinarily complex games - see the sketch below.) In those systems, I do not see intelligence, or learning. I see heuristics and rules. Complex, maybe. Useful, almost certainly. But reliable in all situations? No. Learning? No. Self-guided? No.
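Here is what I mean, in the smallest form I can manage - a made-up toy game standing in for chess, since chess itself wouldn't fit here. The complete rules fit in one sentence (take 1, 2 or 3 sticks per turn; whoever takes the last stick loses), and "perfect play" falls out of nothing more than exhaustive search, with no understanding anywhere.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def winnable(sticks: int) -> bool:
    """Can the player to move force a win with `sticks` left on the table?"""
    if sticks == 0:
        return True                      # the opponent just took the last stick
    return any(not winnable(sticks - take)
               for take in (1, 2, 3) if take <= sticks)

print(winnable(21))   # False: 21 sticks is a lost position, found by search alone
```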
Maybe "quantum computers", with their esoteric rules and total disregard for standard Newtonian physics can break through there. But, for sure, it's not going to be any time soon, or even on your iPhone 12. And we still think that quantum computers are Turing-complete too (http://epubs.siam.org/doi/abs/10.1137/S0097539796300921), at the least ones we could build, understand and control. Let's hope that, like I suspect humans may be, they are MORE than just Turing-complete.
Meanwhile, I still can't get any personal digital assistant technology to recognise my voice properly to make "Navigate Home" or even "Play Bohemian Rhapsody" work reliably enough. However, people still keep telling me that such things 'learn', and will take over the planet.
Not anytime soon.