Friday, April 6, 2018

Towards a Technological Co-singularity


Machine learning isn't Skynet. It just isn't. There is a massive difference between training a learning algorithm to drive a car down the road and building a general purpose artificial intelligence. General AI will never happen, not because we can't do it, but because no human being would be insane enough to do it, or at least, no group of human beings. This assertion, which appears so unfounded on its face, is actually a well-supported conclusion. Let me explain.

To bring about a general superintelligence you need to mimic evolution. All gains in the field have been arrived at through a process that directly mimics evolutionary training. No one really knows how an algorithm learns to recognize dogs in photographs, or faces, or whatever. Researchers build a bot that randomly constructs algorithms. Then they test these thousands of randomly generated algorithms to see which one is most effective at recognizing cats in pictures, driving a car, or whatever. Then they throw out the poor performers and "mutate" the high-performing algorithms by letting the bot randomly change portions of code. Then they repeat the process over, and over, and over, and over, and over again. Poor-performing algorithms are deleted (killed), and better ones are selected for the next round. The whole process mimics mutation followed by selection.
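Here is what that loop looks like as a minimal Python sketch. The population size, mutation rate, and fitness function are placeholders standing in for whatever task is actually being trained:

```python
import random

POP_SIZE = 1000       # candidate algorithms per generation
KEEP = 100            # high performers kept each round
MUTATION_RATE = 0.05  # chance that any given "portion of code" is changed

def random_candidate():
    # Stand-in for a randomly constructed algorithm: a vector of parameters.
    return [random.uniform(-1, 1) for _ in range(64)]

def fitness(candidate):
    # Placeholder score for "how well does it recognize cats / drive the car".
    return -sum(x * x for x in candidate)

def mutate(candidate):
    # Randomly change portions of the "code".
    return [x + random.gauss(0, 0.1) if random.random() < MUTATION_RATE else x
            for x in candidate]

population = [random_candidate() for _ in range(POP_SIZE)]
for generation in range(500):
    # Test everything, delete (kill) the poor performers...
    survivors = sorted(population, key=fitness, reverse=True)[:KEEP]
    # ...and rebuild the population from mutated copies of the winners.
    population = [mutate(random.choice(survivors)) for _ in range(POP_SIZE)]
```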

There is a really simple way to develop a successful machine artificial intelligence: just build a self-replicating robot and let it out into the wild. Since all the gains of AI research have come from evolutionary approaches, it is also the only realistic way to create it. No simulated environment will ever be complex enough to fully reproduce evolutionary processes. For machine intelligence to actually be developed, you would have to literally construct some robot animal that constructs replicas of itself using the minerals found in dirt, or whatever, and then release it into, say, the Amazon rainforest. You just give it a simple command: reproduce. A few million generations later you come back and find that some sort of millipede made of silicon has evolved. Literally, the evolutionary process creates it. Like the learning algorithm for speech recognition, its training is all grounded in survival. Thousands of generations of the bot are killed by rust, animals, lack of minerals, water, etc., until one day you come across a machine that can defend itself against attack, is waterproof, and seeks out the minerals to construct duplicates.
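A toy simulation of that wild-release scenario, with made-up hazards standing in for rust, predators, and mineral scarcity. Note that unlike the lab version above there is no fitness function at all; survival itself is the filter:

```python
import random

# Invented hazards and the trait that survives each one.
HAZARDS = {"rain": "waterproof", "predator": "armored", "famine": "mineral_seeking"}
CAPACITY = 1000

def offspring(parent):
    # Replication with mutation: each trait flips with small probability.
    return {trait: (not value) if random.random() < 0.01 else value
            for trait, value in parent.items()}

# Start with robots carrying random, mostly useless trait combinations.
population = [{t: random.random() < 0.1 for t in HAZARDS.values()}
              for _ in range(CAPACITY)]

for generation in range(1000):
    hazard = random.choice(list(HAZARDS))
    # No fitness function: robots lacking the relevant trait simply die.
    population = [bot for bot in population if bot[HAZARDS[hazard]]]
    # The survivors replicate until the niche is full again.
    while 0 < len(population) < CAPACITY:
        population.append(offspring(random.choice(population)))

print(sum(all(bot.values()) for bot in population), "fully adapted bots remain")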

And you would have to be insane to build such a machine, and even if you did, it would not be a superintelligence but a silicon version of an animal.

It would also be something relatively easy to exterminate. Contrary to Hollywood movies, such a living organism would be relatively easy to locate and kill. An army of thousands (or even millions) of men might be required, but armed with metal detectors they could exterminate the whole species. A computer virus might also be needed, but human beings drive other organisms to extinction all the time, and a deliberate attempt to kill it would undoubtedly work. It may be silicon (or germanium or indium arsenide), but remember, it mutates no faster than any other organism, and like any animal it has to die to evolve.

To actually create a machine superintelligence you would have to bring this machine animal into existence and then relentlessly upgrade it, on purpose, to make it smarter than humans. First you would have to force-evolve it to human-level intelligence, and then beyond. It would take a team of thousands of researchers and billions of dollars, and it would draw a backlash from governments around the world, the press, and the public.

In the meantime, you already have an intelligence lying around that is ready to be relentlessly upgraded to superintelligent status: humans.

You see, it turns out the ethics of eugenics and the ethics of building an AI are pretty much identical. Since you can only arrive at a superintelligent agent by (a) having or bringing a self-aware intelligence into existence, and (b) upgrading it with forced evolutionary processes until it surpasses you, you are practicing eugenics on a self-aware machine intelligence. You have to "murder" self-aware bots to get superintelligent bots, or you have to relentlessly upgrade their code in a process that is virtually identical to gene therapy, transferring "good code" from "healthy machines" to "unhealthy/sick machines." You either have to gas the defective bots Hitler-style, practice the bot version of embryo screening, or transfer code. In the end you are just practicing machine eugenics.
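That "transfer good code" step has a precise genetic-algorithm analogue: the crossover operator. A minimal sketch, with the bots reduced to parameter vectors purely for illustration:

```python
import random

def code_transfer(healthy, sick, donor_fraction=0.5):
    # "Gene therapy" for bots: overwrite a random contiguous slice of the
    # sick bot's parameters with the corresponding slice from the healthy
    # bot. In genetic-algorithm terms this is just a crossover operator.
    child = list(sick)
    start = random.randrange(len(sick))
    length = int(len(sick) * donor_fraction)
    for i in range(start, min(start + length, len(sick))):
        child[i] = healthy[i]
    return child

healthy_bot = [0.9] * 8   # "good code"
sick_bot = [0.1] * 8      # "defective code"
treated = code_transfer(healthy_bot, sick_bot)
```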

And so this brings us to the final point of this essay: why would you do any of this in machines when you already have human beings to experiment on? And why compound your ethical issues? Going the machine route is both less efficient and more unethical. Why would capitalism spend money trying to force the evolution of machine intelligence in the wild when it could just upgrade humans? The only way to get a consciousness that mimics self-awareness is to mimic the evolutionary selection forces that produced human beings. Why do that when you can upgrade the human beings? It's cheaper.

And don't get it in your head that some traffic system that manages all of the cars in LA or something will somehow gain self-awareness. It won't. Without the evolutionary forces to train self-awareness, even a city-scale AI that manages the traffic of millions of cars is still just an animal by human standards, albeit a very large animal. And yes, it might kill 3 drivers one day in a homicidal rage. Everyone will freak out and say, "the singularity is here! The machines have finally risen up against us!" The NTSB will investigate only to find the completely boring explanation that it killed those 3 drivers to save 20. After all, it is programmed to "reduce net traffic fatalities," those drivers were an especially irksome group responsible for dozens of crashes, and the machine made the completely rational calculation that getting rid of them would reduce traffic deaths by blah blah... and so forth.
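The "boring explanation" is nothing more than an expected-value comparison. A toy version of that calculation, with every number invented:

```python
# Toy version of "reduce net traffic fatalities": compare the expected deaths
# of each available action and pick the minimum. All numbers are invented.
expected_deaths = {
    "do_nothing": 20,           # the irksome drivers keep causing crashes
    "remove_three_drivers": 3,  # deaths caused by intervening
}
best_action = min(expected_deaths, key=expected_deaths.get)
print(best_action)  # -> "remove_three_drivers"
```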

And don't give me crap about a machine upgrading itself. That is even less likely than a human upgrading himself. Since all gains come from culling defective algorithms, the logic of the techno-singularity rapidly converges with the logic and ethics of eugenics. Superintelligence without an evolutionary force crafting it is just a machine that likes to daydream. The notion that one can relentlessly upgrade one's way to superintelligence falsely assumes that an AI can exceed humanity without running countless experiments. The machine needs real-life feedback.

Anything that can be done in machines can be done better in humans with genetics.

And so the co-singularity is what happens when human beings use AI to develop gene therapies, the gene therapies make humans smarter, the smarter humans build smarter machines, and the smarter machines make smarter human beings. The process is recursive, and it occurs within the human species rather than without. Rather than bother with the wasteful process of building a whole new machine biosubstrate, the co-singularity builds itself in tandem with the organic as a series of genetic improvements and augments.
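If that sounds circular, it is, deliberately. Here is a toy rendering of the feedback loop; the growth coefficients are invented, and only the shape of the curve matters:

```python
# Toy model of the co-singularity feedback loop. The 0.1 coefficients are
# invented; the point is the mutual amplification, not the numbers.
human, machine = 1.0, 1.0
for generation in range(10):
    human *= 1 + 0.1 * machine   # machine-designed gene therapies lift human intelligence
    machine *= 1 + 0.1 * human   # smarter humans build smarter machines
    print(generation, round(human, 2), round(machine, 2))
```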

To summarize:

1. The only successful AI development methods are evolution-mimicking methods.
2. The evolution of a general problem solving AI would require "natural release" of that AI into the wild to allow it to learn in nature, which would never be tolerated by governments.
3. The ethics of forced machine evolution are identical to the ethics of eugenics.
4. The first general AI machines would be animals, who would need to be upgraded to human-level intelligence before upgrading them to beyond human-level intelligence.
5. The process of building an entirely new machine biosubstrate would be hugely wasteful.
6. Capitalism would prefer to upgrade the existing carbon biological substrate instead, because it is more cost effective.

I will give one more argument: (7) technological development follows the path of (greatest iteration) ÷ (least R&D cost), and this rule favors genetics over a machine substrate.

First we had wax cylinders, then vinyl records, cassette tapes, CDs, DVDs, and finally digital music. Why did musical formats take a detour through magnetic media? Why not just stick to discs?

Because technology follows the path of greatest iteration.

Companies want to maximize profit and minimize R&D costs. This means building on what you know and putting out a new version of an old product every year, even if the new version is worse (hence the migration from vinyl to cassette). Firms want to produce continual upgrades in order to maximize profits. They prefer relentless iteration over radical technological disruption. This translates really well to a genetic business model, and poorly to an AI business model.
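As a back-of-the-envelope illustration of rule (7), here is the iteration-per-R&D-dollar comparison in code; every figure is invented, and only the ratio matters:

```python
# Toy scoring of rule (7): paths that allow more iterations per unit of
# R&D cost win. All figures are invented.
paths = {
    # (iterations per decade, R&D cost per iteration, arbitrary units)
    "human_gene_therapy": (10, 1.0),
    "wild_machine_evolution": (1, 50.0),
}
for name, (iterations, cost_per_iteration) in paths.items():
    print(name, iterations / cost_per_iteration)  # higher = favored path
```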

It is fairly easy to compare entire genomes against the existing studies of human traits, and then mine the correlations for profitable gene therapies. There are already companies assembling massive databases to do just that, like 23andMe.
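A minimal sketch of that correlation-mining step, using simulated data. Real genome-wide association work involves far larger cohorts and much heavier statistical correction, and nothing here is 23andMe's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_variants = 5000, 1000

# Synthetic genotypes (0, 1, or 2 copies of each variant) and a trait that
# a handful of variants genuinely influence. All data here is simulated.
genotypes = rng.integers(0, 3, size=(n_people, n_variants))
true_effects = np.zeros(n_variants)
true_effects[:5] = 0.5
trait = genotypes @ true_effects + rng.normal(size=n_people)

# Mine correlations: score every variant against the trait...
scores = np.array([np.corrcoef(genotypes[:, v], trait)[0, 1]
                   for v in range(n_variants)])
# ...then rank candidates for follow-up (or, in this essay's framing, therapy).
candidates = np.argsort(-np.abs(scores))[:10]
print(candidates)  # the five causal variants should dominate this list
```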

So how would one actually arrive at artificial general intelligence? Without selection effects, the dreaded machine that repeatedly upgrades its own intelligence will probably never happen. Think about it: how does one get from intelligence level x to intelligence level x + n without feedback? Intelligence must be discovered. It does not just materialize out of nothing.

Imagine there are millions of personal robots put into circulation performing labor for human beings as domestic servants, workers, or whatever. Any skill learned by one of these bots is automatically transmitted to all the others once a day when it powers down to recharge. The machine does not so much evolve as learn, and maybe, through a method like this, it could develop self-awareness. Maybe the repeated need to perform in social interactions could cause it to develop a sense of self.

But I find the idea of a computer connected to the internet teaching itself about the world, and upgrading itself relentlessly, to be ludicrous. It has no prior understanding of anything, no physical sense of what things feel like, smell like, etc. Humans aren't just brains; we are physical creatures, and without a physical knowledge of the world, without going through some evolutionary process that connects the mind to the body, a machine can never really understand anything. The magical self-upgrading machine is a fairy tale. Things have meaning to us because of our evolutionary background, because of emotions, hormones, feelings, gut bacteria, and a lot of other things. A brain in a vat, silicon or carbon, is a mind without meaning. This mode of existence may appeal to a philosopher, but it will never produce something that can really "understand" anything. "Self" cannot be separated from evolution, machines will never be allowed to evolve, and no self can occur without evolution. Capitalism will route through humans via genetics rather than around them through AI. Screen cap this.


