Paul Almond is an outspoken UK atheist and independent researcher in artificial intelligence. His projects include a conceptual AI system built on a probabilistically expressed hierarchy, which uses meaning-extraction (partial-model) algorithms and learns by experiencing the real world. The system is superficially similar to that of Jeff Hawkins, but differs in its rigorous treatment of probability and in incorporating planning into the hierarchical model itself, removing any distinction between planning and modeling.
"Hawkins' view," says Almond, "is clearly that meaning gets abstracted up and then some output system starts to send actions down where they get 'unabstracted,' and related to each level in the hierarchy by some sort of coupling. To me, this is way off the mark; we do not need any such planning system. I do not really take the Hawkins hierarchy seriously. It does not deal with probability and that is necessary to take the approach to planning that I think is the right one."
During the dotcom boom, Almond worked as a computer programmer and computing instructor while developing the theories that underpin his current research. His papers include:
- The Diminished God Refutation: Why Unlikely Sequences of Events Do Not Prove a God
- A Refutation of Penrose's Gödel-Turing Proof that Computational Artificial Intelligence is Impossible
- Getting Darwinian Evolution to Work
- Modeling in Artificial Intelligence
- John Searle's Position within an Evolutionary Context
- Occam's Razor
- Representation and Planning of Actions in Artificial Intelligence
- Paul Almond's website
- Machines Like Us interview with Paul Almond
- Paul Almond's articles on Machines Like Us
- Paul Almond interviews John Searle for Machines Like Us
Paul Almond Quotes
Artificially intelligent systems must not merely observe reality and infer things from it; they must DO things.
The real problem in an unlikely sequence of events, or one thought to be unlikely, is its specificity. We may delude ourselves that we are dealing with this specificity by 'sweeping it under' a diminished god, but in reality this achieves nothing. The specificity is still there – it has merely been located inside a god, where there is no reason why it should not face the same questions about plausibility.
When you install programs onto a computer you are placing enormous trust in their creators. You are trusting them to have almost total power over your computer and access to data about your business operations or personal life. In the past, people have often had to place some degree of trust in those hired to do jobs, but the degree of trust placed in software makers is unparalleled in human history. It is a degree of trust which would be alien to most people's way of thinking in other contexts.
The universe cannot have started from a state of "nothing" because it does not make any ontological sense to talk about such a state. What would this "nothing" state mean? What properties would it have? Imagine a reel of movie film, each frame showing a picture of the universe at some instant in time. Each picture is slightly different from the last because of the passage of time. Imagine that the film is cut into individual frames which are scattered on the floor. You could piece the film together by matching frames together with those that seemed to depict scenes just before or after them. I would say that this kind of idea allows us to define any passage of time. Now, what if one of the frames showed "nothing"? How could you know where in the sequence to put it? You could fit it in anywhere just as well. It may as well go in the middle as at the start or end. There is nothing to "connect" the "nothing" frame to everything else – unless you start to put some things into it to serve as clues and it then becomes a "something." This does not merely make it difficult to say where the "nothing" frame goes: it makes it ontologically meaningless to say where it goes – and removes it from any consideration as a possible previous state of reality.
A point that K. Eric Drexler makes about nanotechnology research also applies to AI research. If a capability can be gained, eventually it will be gained, and we therefore cannot base humanity's survival on AI never happening. Doing so is denying the inevitable. Instead, we can only hope to manage it as well as possible. Suppose we took the view that ethical people would not create AI. By definition, the only people creating it would be unethical people, who would then control what happened next; by opting out, the ethical people would merely be handing power over to the unethical. I think this makes the position of ethical withdrawal ethically dubious.
If someone produces an AI system, that kicks the whole idea of "mind" out of the realm of the supernatural and firmly into the realm of something that can be analyzed — including God's mind — and God would not fare very well because of it. Even ignoring things like information content, the existence of non-supernatural minds would in itself weaken a claim for God, as most theists probably do think that he is supernatural. Such theists would then be claiming, effectively, that although "natural" minds can exist in computers, there can also be extra-special ones, like those belonging to gods, that are supernatural. This would be as nonsensical as claiming the existence of a supernatural baseball bat, banana or tax return.
In some future civilization it may be impossible to distinguish between imagining something and programming a simulation: what we think of as “programming” may become a special case of the society’s thought process. Once technology is advanced enough to make very fast computers, I cannot see any limit to it.