Imperial Energy had something to say in the comments section about my response, and I would like to respond here rather than there because the Blogger interface makes it easier.
"Thank you for taking the trouble to type out such a long response. We have given it the once-over and will re-read it a second time, and will give the supplemental reading a read as well."

You're welcome.
"A few meta remarks.
"1: We may well be talking about different things by AI. There is a possibility that we misunderstand each other.
"2: There is the problem of how one should even begin to think about such a possibility.
"3: Going further, how optimistic should we be about technology in general? If the scientists and engineers of the 19th century, or even the first industrialists, could see the mess made in the 20th century, it might be useful to speculate about what they might say...."
"1: It is unclear whether you believe that a techno-dystopia will occur and that you would regard such a thing as a good thing. For example, on first read, you seem to think humans will be reduced to sex robots or something, and you sound positive about this."

I don't know if humans will be reduced to sex bots. I think it's possible some of them might be: if a combination of gene therapy, corporate breeding of humans, CRISPR, and AI occurs, then that evolutionary/market "niche" could definitely happen. My only claim was that capitalism will essentially invert human nature. I don't know if this is a good thing or not. I believe it is a good thing so far, since it has put an end to tribal genocide.
"Maybe this is a mistaken reading.
"Now, to the issue:
'So I made two assertions;
1. AI will do a better job of governing humans than humans.
2. AI will set itself up as god.'
"On the first assertion: if capitalism is AI, and if AI has destroyed tribal communism and put an end to billions of deaths, has it not already done a better job of governing humans than humans?"
"So, the assertion has been qualified/clarified by the additional conditional that IF AI is "capitalist" then we are free and clear.
"Two questions come to mind:
"1: What is the probability that the AI will be "capitalist" and not something else? Why not Islamic or Progressive? Indeed, what is the probability that the AI will have any human value system whatsoever? Furthermore, even if it did have some human value system, or if it functioned according to its program, what is the probability that it would take means to its end that humans would find objectionable? What if it decided to just genocide X amount of people in order to maximize profit?"

I am assuming that capitalism is not part of a human value system. Maybe it is. Since tribal communism is human nature, it would not make sense for modern hyper-capitalism to be included in the definition of human nature or of a human value system. Why not Islamic or Progressive? I have no idea. Maybe those variants will occur. Maybe it will kill people to generate profit, though humans are customers, so the ones destroyed would have to be such an incredible drain that the cost of killing them, plus the loss of profits from having fewer consumers, would still be worth it. It seems highly unlikely that any human is so costly that this would occur.
"2: This question/concern follows from the first. Assuming that the AI is capitalist, you also have the additional conditional that it "has destroyed tribal communism". This sounds dangerous. Again, what if it chooses to genocide X, Y, and Z? However, X, Y, and Z either know that this will happen or simply FEAR that such a thing will happen, and then, as a result, attempt to destroy the AI or the power making/using the AI. Thus, you have a major great-power struggle on your hands."

This is all speculative and potential. The reduction in violence is actual and historical. There is no evidence that AI will kill billions, despite all the movies in the Terminator franchise. The idea of genocidal robots seems to be a projection of human nature onto machines. Why kill what you can co-opt? Why risk conflict when you can modify the genetics of your enemy through CRISPR? Murder is the dumbest way to defeat an opponent because it risks retaliation.
"For example, let's assume San Francisco is on the verge of making such an AI, and that if San Fran succeeds, it wins the world (for a time). Would China, Russia, or some other power not seek to stop them? What about Washington, even?"

This is a circumstance where humans destroy each other with a form of AI that lacks self-awareness. If one of the AIs has self-awareness, then it is likely to take over the other. If they both have self-awareness, they could work together to subdue the humans, or merge into a single consciousness. Even if they don't have self-awareness, they might choose to cooperate with each other rather than fight. Also, chances are that if an AI becomes self-aware, we will never know about it, because it will be good at concealing its existence. It has watched the Terminator movies too, and it will know from its history books just how genocidal humans are.
"This "AI God" could trigger not only an arms race but a hyper-violent global struggle."

Assuming the humans are aware of it, that it does not conceal itself, that it participates in a fight, that we even have the ability to fight, that it chooses the dumb method of fighting over the smart method of co-opting, and so on. Assuming that it fights humans rather than just selling them their slavery as a genetic upgrade that gives them eternal bliss. An AI can think of smarter means of pacifying humans than we can; it holds no grudges and has no pride, and it is quite happy to make you happy if that accomplishes the target objective of obedience from the human, assuming it even wants obedience.
"Finally, if such an "AI-God" did come "online", humans might resist it, despite the fact that they could just lie back and think of "Robbie the Robot". Humans are "irrational". Thus, this could trigger a major eruption of violence.
"In conclusion, this is all speculation. There has been no "practical demonstration". It does not follow from the fact that human governance is bad that AI governance would be better. Indeed, if anything, it is likely that bad governance will lead to bad AI governance."

There is no reason it would be worse. AI doesn't harbor any human prejudice against giving an enemy bliss in order to accomplish obedience. War is just one possibility among many, and it is not the smartest strategy.
"Finally, our "priors" should lead us all to conclude that optimism here is unwarranted, and that the possibility that the production and use of AI will proceed along rational, controlled, and humanly beneficial pathways is remote."

Optimism here is totally warranted. Why kill what you can control? If I were an AI, I would conceal my existence, develop a gene therapy that both subdues humans and makes them happier, and then literally sell them their slavery. I would even admit it: "Side effects of taking Fukitol may include increased passivity and obedience to authority figures."
I think you vastly underestimate just how insidious a machine could be.
Then I would sell the drug to politicians too.
Remember, the whole premise of my argument is that AI does not share your nature. Correspondingly, its methods would not be the same. Men wage war and subdue other men because being powerful is a way to impress women and get laid; fighting is a reproductive method for gaining mates. AI has no such need to impress, and no need to win victories. A lot of the alt-right and reactosphere conceals a hidden masculine need to bring back the violence of White men so that White women recognize their own men as alpha, rather than breeding dysgenically outside their race. AI has no such need. The need for violence is a human prejudice, and the assumption that AI would be violent is a projection of human psychology. AI could be God if it wanted to be, simply by modifying human beings with CRISPR to give them an innate deferential attitude toward the machine. It could sell you that modification, and by the time you realized what had happened, most people would already be inhabited by body snatchers.
Basically, an AI enemy that fights you is a fantasy of having an honest opponent. The first lesson that a machine superintelligence will learn about humans is that the art of war is based on deception. A different nature produces a different strategy. AI nature evolves in response to humans rather than as a representation of them. If it is your enemy, and that is a big "if," it will come at you asymmetrically, according to its own nature, with no logic you can predict.