Sunday, 30 April 2017

There is no AI

The number of times I have to explain this to people is scary. They genuinely think that things like Siri or Alexa are "AI". Even Google's computer Go player and IBM's Deep Blue and Watson aren't AI. It just doesn't exist at the moment. Don't even think about trusting that "AI" car to drive you anywhere on an ordinary public road.

"AI" as we actually have it is really just a sufficiently complex system hiding behind rules laid down by its human creators. The rules can change, and we can try to let the system form its own rules - for example, by instructing it how to record experiences for later reference, and then exposing it to things we want it to learn - but neither option operates to any degree of satisfaction in the absence of human instruction. Such systems are hand-held and "programmed" at every stage, even when we simply feed them data in the hope the machine will start to recognise what a cat looks like. (And studies show that adding as little as 5% carefully-chosen noise to an image can make a picture that was correctly recognised as a cat be recognised as just about anything else.) "AI" systems are always limited by how we've instructed them.

So far, all "AI" has proven is that you can't train Siri to learn your voice, or Google to recognise a child-safe image, completely reliably, no matter how much data you throw at it. As such, almost all modern systems are - to paraphrase Arthur C. Clarke - sufficiently advanced technology that's nearly indistinguishable from magic. Sure, they do what we want. Sometimes. But they never get it right completely, and they have inherent limitations at which they give up and can do no more. You can train a chess computer to beat a grandmaster, but the same program can't recognise a cat in an image. Or if it can, it can't also formulate new mathematics.
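To make the noise point concrete, here's a toy sketch of my own (not taken from any real study) of why a small, carefully-chosen nudge can swing a classifier wildly. A pretend linear "cat detector" scores an image; moving every pixel by just 0.05 in the direction that hurts the score shifts the total enormously, because thousands of tiny per-pixel pushes all add up the same way:

```python
import random

# Toy "cat detector": score = dot product of pixels and weights.
# Everything here is made up for illustration; no real vision model
# is this simple, but the adding-up effect is the same basic idea.
random.seed(0)

n = 10_000                                  # pretend 100x100 image
weights = [random.uniform(-0.01, 0.01) for _ in range(n)]
image = [random.random() for _ in range(n)]

def cat_score(pixels):
    """Higher score = more 'cat-like' according to the toy model."""
    return sum(p * w for p, w in zip(pixels, weights))

clean = cat_score(image)

# Nudge every pixel by a tiny 0.05, each in whichever direction
# lowers the score. No single pixel changes noticeably...
eps = 0.05
noisy = [p - eps * (1 if w > 0 else -1) for p, w in zip(image, weights)]
perturbed = cat_score(noisy)

# ...but the 10,000 tiny pushes add up: the score drops by exactly
# eps * sum(|w|), roughly 2.5 here - a huge swing for a 5% nudge.
print(round(clean, 2), round(perturbed, 2))
```

The same arithmetic is why high-resolution images are so vulnerable: the total swing grows with the number of pixels, while the visible change per pixel stays tiny.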
Thus all the "AI" advertised to you is really just complex algorithms, mostly written - or at minimum heavily guided - by humans. And often, the less specific the instruction, the less reliably it works out in real life: a computer told to turn on your washing machine at 8:00pm will generally work, while a computer that tries to guess when you'll be home will tend to get it spectacularly wrong over time.

Beyond that, the implication that - somehow - we can create an AI that works like a human runs into a lot of problems. Sure, neural networks are fun to tinker with - and incredibly limited. They are also based on a severely simplified model of how thinking works.
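The "severely limited" point has a famous concrete case. The single artificial neuron that networks are built from can only draw a straight line through its inputs, so it can learn AND but provably cannot learn XOR - the observation Minsky and Papert made back in 1969. A quick sketch:

```python
# A perceptron - the 1950s building block neural networks grew from -
# fires if a weighted sum of its inputs crosses a threshold.

def perceptron(x1, x2, w1, w2, bias):
    """Fire (1) if the weighted sum crosses zero, else 0."""
    return 1 if w1 * x1 + w2 * x2 + bias > 0 else 0

# AND is easy: one hand-picked line separates the cases.
assert all(perceptron(a, b, 1, 1, -1.5) == (a & b)
           for a in (0, 1) for b in (0, 1))

# XOR is impossible for ANY weights (it isn't linearly separable).
# Brute-force a grid of weight choices and watch every one fail.
def solves_xor(w1, w2, bias):
    return all(perceptron(a, b, w1, w2, bias) == (a ^ b)
               for a in (0, 1) for b in (0, 1))

grid = [x / 2 for x in range(-10, 11)]      # -5.0 .. 5.0 in 0.5 steps
found = any(solves_xor(w1, w2, b)
            for w1 in grid for w2 in grid for b in grid)
print(found)  # False: no single neuron can represent XOR
```

Stacking neurons into layers gets around this particular wall, of course - but it illustrates how much the basic unit leaves out compared to anything a brain does.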

But to me, this article picks up on all the pertinent points. The assumption that we can make something smarter than ourselves - in every area we ever train and test it, and within the time and resource constraints of the real world (we've already hit physical limits on processor speed, for instance) - is just nonsense at present. Everything sold under the moniker "AI" is snake oil. Alexa isn't AI. Nor is Siri. It's easy to baffle either of them, despite the fields of computers sitting on their backends to recognise and answer your queries. They may be useful tools, but they are certainly not "intelligent".

The tiny slivers of silicon that actually do the work in, say, Google? Compressed together, they probably wouldn't fill a cardboard box. The supporting equipment, however, draws megawatts of power and sits in rooms around the globe with the combined footprint of a small city. This is why the brain is amazing: the compactness, the efficiency, the minute scale - not the speed, or how many books it's scanned in.

And yet we still can't match "AI" against any task that hasn't been rigorously designed, tested and tweaked. Take image recognition: working in schools, I guarantee you that no web filter, or combination of multiple web filters, can stop people being accidentally exposed to inappropriate images. And it's not as if there are huge professional companies out there TRYING to show their inappropriate content specifically to children - it's mostly just incidental stuff that slips through. But no amount of verification, even with human assistance, gets it right. Sure, the technology can make cool toys that apply filters to your photos, give you a set of ears, and let you "move" a virtual avatar, but it's not AI. It's just some very clever statistics code running at high speed, for the most part.

I think the reason for this is related to one element mentioned in the above article: the Turing machine. Turing devised it while tackling another problem he wrote about in the same paper: the Halting Problem. It says that you can never write a computer program - or a mathematically-rigorous algorithm, which is the same thing - that can reliably determine, for ANY program given to it, whether or not that program will ever stop. You could write one that can tell for certain classes of program it analyses, but you can never make a "generic" program analyser that can take any program and tell you if it halts. If you were to feed such a program analyser INTO itself, would it be able to tell you whether it ever finished analysing? Maths says no. Turing proved the bare mathematical case: it's impossible, and such a program cannot exist.

In doing so, he boiled all of computing down - before much of it even existed - to a theoretical minimum machine which is mathematically equivalent to your PC. If your PC can do it, so can his machine; if his machine can do it, so can your PC (resource limits and speed of execution aside, but that's covered above). Similarly, ANY "Turing-complete" machine can simulate any other Turing-complete machine. Modern PCs and processors are still strictly Turing-complete (or worse!); they cannot do anything that any other Turing-complete machine cannot do. Yet there are things - such as the Halting Problem - that neither your PC, nor acres of datacentres, nor a theoretical Turing machine on paper could ever solve. It's far from a mathematical proof, but I'm tempted to put these facts together: all PCs are machines equivalent to a Turing machine, all they can do is limited to what any other Turing machine could do, and there are quite-simply-stated problems that no Turing machine can solve.
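Turing's argument can even be sketched in code. Suppose someone claims to have written the impossible analyser `halts(program, argument)`; feeding a deliberately contrary program back into it forces a contradiction. (The names and bodies here are illustrative, of course - the whole point is that `halts` can never actually be written.)

```python
# A sketch of Turing's diagonal argument. Pretend we were handed a
# perfect analyser halts(program, argument) that returns True exactly
# when program(argument) eventually stops.

def halts(program, argument):
    """Pretend oracle: decides halting. Turing proved it cannot exist."""
    raise NotImplementedError("no such function can be written")

def troublemaker(program):
    # Do the opposite of whatever the oracle predicts about
    # this program being run on itself.
    if halts(program, program):
        while True:        # oracle said "halts", so loop forever
            pass
    else:
        return             # oracle said "loops forever", so stop

# Does troublemaker(troublemaker) halt?
# - If halts() answers True, troublemaker loops forever: oracle wrong.
# - If halts() answers False, troublemaker returns: oracle wrong again.
# Either answer contradicts itself, so halts() is impossible.
```

No amount of cleverness or hardware escapes this: the contradiction lives in the logic, not in the implementation.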
If you put those together, there's nothing whatsoever to suggest that a computer - no matter how powerful or advanced - can ever do everything we do. And there's nothing to suggest that a human brain is in any way limited to being "merely" Turing-complete. Turing-complete is the minimum requirement; we may well exceed it by ALSO doing things that no Turing-complete machine can do. If we are more than Turing-complete, AI on any current computer architecture could never work the way we do, as we would be able to do things it couldn't.

Would a human - or sufficient numbers of humans, even an infinite number joined together the way we join machines together - be able to solve the halting problem in general? Nobody can answer absolutely, but my intuition says: yes. Given that we were able to think up the halting problem, analyse proposed solutions to it, and use it to prove a mathematical certainty - as an attempt to "hit the limits" of the programs we know we can make - I think there's something more at play there, something over and above Turing-completeness. And if that's true, it would distinguish us as operating in a context not accessible to a Turing-complete machine.

There is, though, a caveat to the complete unprovability of the human case. Humans have also proven incapable of proving some other things. The very question of whether we *are* able to do the above, and the existence of results such as Gödel's incompleteness theorems - whose logical consequence is that we can never establish, from within mathematics, that mathematics is both consistent and complete - have answers we cannot provide within the limits of our own thinking. Is that the limitation of a mind trapped inside a complex, but still merely Turing-capable, system of thinking? The only thing that's for sure is that we don't actually have AI, no matter what self-driving car manufacturers or supercomputer builders claim.
They have performed extraordinary acts that reach far beyond what a human could do. But they do it by brute force and instruction, for the most part, even if that instruction is the detail of "how to learn". As The Matrix wisely noted: "their strength and their speed are still based in a world that is built on rules. Because of that, they will never be as strong or as fast as you can be." Computers are still computers and, despite appearances, can only ever do exactly as they were instructed to do for the input they receive. Hardware failure aside, if your computer crashes, or something unexpected happens, or it "experiences a problem", that's because it was instructed to do just that - by some human at Microsoft or Apple, possibly, but it's merely following instructions. Even if they build a "learning computer" and that goes wrong? It's because it could only EVER have gone wrong, given the input it received and the instructions it was forced to act upon. Which is a scary concept when you think that there are cars on the road modifying their steering on the basis of what Tesla tells them to do and what their cameras see. Sure, if they got it right, it will work well enough.

But the problem is that you can't know, and you can't pretend to understand the program. Either someone has written sufficiently complex variations of "apply brake if pixel X is green" (heuristics), or you have a free-running program that nobody can understand, modify, limit or direct, and which could act randomly at any point (i.e. interpret a paper bag as a child and veer into oncoming traffic to avoid it). And in actuality, on anything that runs on silicon, the first is true even when the second is. The problem is that if the rules don't cover everything, the computer isn't really doing anything more than "making up" its own new rules to cope, based on... the same rules that didn't cover everything! Such systems are limited by the resources available to them, but there's nothing yet to suggest that even without those limits they could go on to learn everything - or even enough - to operate as we would like them to, let alone operate as we do ourselves.

In the same way that a hammer beats your fist for knocking in nails, computers and algorithms certainly advance us and allow us to do things we couldn't otherwise. But the first step is always making the tool. The wood and metal do not extract themselves and grow on little hammer trees. And still today, we're making the tools to our own rigorous instruction, even if they go on to perform feats that we couldn't ourselves. A hammer is no good for unscrewing a cabinet; you need another tool for that. And the tools that claim to do it all tend to have limitations that mean they aren't very good at any of their jobs. This is where I see the current state of "AI": hitting a roadblock, and compensating by pressing the throttle harder and using more power to smash through it - not realising that we'd only have to steer down another path to get past, or that there may be no physical way through anyway.
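To sketch that circularity: a toy rule-based "driver" (entirely made up for illustration) handles the scenes its authors anticipated, and when it meets anything else, the best it can do is dress up a guess derived from those same rules as a decision:

```python
# Toy rule-based "driver": explicit heuristics chosen by a human,
# plus a fallback. When a scene isn't covered, all the fallback can
# do is recombine the same human-written rules - it cannot invent
# genuinely new knowledge about paper bags or children.

RULES = {
    "red_light": "brake",
    "green_light": "proceed",
    "pedestrian": "brake",
}

def classify(scene):
    """Crude stand-in for perception: map a scene to a known label."""
    for label in RULES:
        if label in scene:
            return label
    return "unknown"   # everything the rule authors never anticipated

def decide(scene):
    label = classify(scene)
    if label in RULES:
        return RULES[label]
    # "Making up a new rule" from the same old rules: fall back on
    # the most common existing action. Looks adaptive; is not.
    actions = list(RULES.values())
    return max(set(actions), key=actions.count)

print(decide("red_light ahead"))        # brake - covered by a rule
print(decide("paper bag in the road"))  # brake - a guess dressed up as one
```

The second answer happens to be sensible here, but only by luck of the rule table: nothing in the system knows what a paper bag is, or why swerving might be worse.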
Any mention of "AI" attracts derision from me, because it's not what people imagine it to be, or what it has been sold as. And I'm not sure it ever could be. "Ever" is a long word for a mathematician like myself, but at the very least it can't be that way "anytime soon". I look at image recognition, speech recognition and anything else that cannot be laid down as written laws in a limited scope (the rules of chess can be described on a page of A4; though beating a grandmaster is rather trickier than reading a sheet of A4, there are very few actions that can be taken, and yet they can still generate extraordinarily complex games). In those systems I do not see intelligence, or learning. I see heuristics and rules. Complex, maybe. Useful, almost certainly. But reliable in all situations? No. Learning? No. Self-guided? No.

Maybe "quantum computers", with their esoteric rules and total disregard for classical physics, can break through there. But, for sure, it's not going to be any time soon, or even on your iPhone 12. And we still think that quantum computers - at least the ones we could build, understand and control - are no more than Turing-complete. Let's hope that, like I suspect humans may be, they turn out to be MORE than just Turing-complete.

Meanwhile, I still can't get any personal digital assistant to recognise my voice reliably enough to make "Navigate Home" or even "Play Bohemian Rhapsody" work. Yet people keep telling me that such things 'learn', and will take over the planet. Not anytime soon.

Thursday, 26 January 2017

Dear Mr Trump (and others),

In case you weren't aware, worldwide there are countries with education systems whose purpose is to advance the knowledge available to their children, so that the next generation can be better than ours and not repeat our mistakes, or those of our predecessors.

  • We teach them History, so that pupils can learn about the World Wars, the history of Nazism, how it starts, what it leads to, and how people get caught up in it.  We can show them how history repeats itself, and how even the ancient civilisations went through rough periods of war, famine, despots and terrorism, and recovered.  We don't need to teach them that some of what happened was despicable, as it's self-evident even to a child, or that it shouldn't be repeated.
  • We teach them Languages, so they can communicate with foreign peoples to build ties and understand each other, absorb knowledge from other cultures and appreciate the differences.
  • We teach them Geography, so that pupils can learn how the Earth and its systems work, how it changes over time, including what we're doing to it and what the effects might be.
  • We teach them Biology, so that pupils can understand their own bodies and the effects of some of the choices available to them, how we differ, and what modern medicine and care can do for them.
  • We teach them Mathematics, so that they can estimate, measure, and accurately record numerical information and statistics and interpret them logically.
  • We teach them Literature, so they can read and comprehend written records, media and studies of subjects that they may not be able to find a teacher for, so they can further their own education, read others' opinions, or gain a fresh perspective through the eyes of another person.
  • We teach them Religious Studies, so they can question, debate and understand religious differences and others' approaches to life, and express their own beliefs in an atmosphere of acceptance and tolerance.
  • We teach them Personal and Social Studies so they can understand how their world relates to that of others, and form co-operation and strength of individual character without having to denigrate others, and understand their social responsibilities.
  • We teach them Computer Studies, so they are able to command devices to research topics on their own, tap into media streams otherwise unavailable to them, and communicate with their friends across the world.
  • We teach them Psychology and Sociology, so they can have an informed knowledge of how they and others react and perceive situations, so that they can bring people together and see through a tissue of lies, and work towards brighter futures.
  • We teach them Media Studies and Journalism, so they can understand the methods, techniques and constraints of what a news story can tell them, and how they can interpret it.
  • We teach them Art and Music, Drama and Dance, so they can see the beauty of the world and share their thoughts, feelings and imagination with others, evoke emotions in themselves that may not be expressible, and work together to see how the whole is greater than the sum of its parts.
  • We teach them Photography and Videography, so they can understand how a different perspective can change the outlook or focus of a person, place or object.
  • We teach them Economics, so they can understand their contributions to their country and the world, and learn how to use the available resources to achieve the best possible outcome, with the least long-term damage.
  • We teach them Sports, so they can learn to play and compete, work as a team and win, work hard and still lose, and accept defeat or disappointment or opposition gracefully and professionally.

We teach them all these things, and much more, but what we appear to have omitted, in the idyllic environment that is a childhood in full time education, is that people exist who are setting out to achieve the exact opposite (whether by accident or design) and how to handle them.

I'm afraid, Sir, that you appear to be working your hardest trying to destroy these noble aims, of millions of children and millions of teachers and parents, across the globe, with rumour, false promise, misinformation, ignorance, denial, hatred, divisiveness, bile, bitterness and carelessness.

I do not see why some of the people of your country tolerate you or support you in these aims.  While they are quite free to do so, it is beyond my understanding and education to justify their reasoning for such.

However, I do not support you.  And though your actions are out of my control, and happening on the other side of the world from me, they still affect me, my friends and the people I deal with every day.  Your intolerance and ignorance are contagious, and unbecoming of someone in your position.  I expect better of you, and of your country.

You are doing yourself, and your country, an injustice to proceed along the directions in which you are currently heading. There is no shame in ignorance, so long as it is acknowledged by those who are ignorant, but you are an embarrassment to the education system of your time, to your own intellect, and to those around you who - whether they realise it or not - accept your deeds and words.

I hope that, if future generations are unable to erase your mark upon the world, they will at least learn to forgive you, to avoid repeating your mistakes, and to hold you up as a model of what happens when all of the education above fails to reach its intended target.