Monday, February 12, 2018

Reply to Imperial Energy, February 12th 2018


Imperial Energy had something to say in the comments section about my response, and I would like to respond here rather than there because the Blogger interface makes it easier.
"Thank you for taking the trouble to type out such a long response. We have given it the once over and will re-read it a second time and will give the supplemental reading a read as well.
You're welcome.
"A few meta remarks.
"1: We may well be talking about different things by AI. There is a possibility that we misunderstand each other.
"2: There is the problem of how one should even begin to think about such a possibility.
"3: Going further, how optimistic should we be about technology in general? If the scientists and engineers of the the 19th century or even the first industrialists could see the mess made in the 20th century, it might be useful to speculate about what they might say....
"Additional remark:
"1: It is unclear whether you believe that a techno-dystopia will occur and that you would regard such a thing as a good thing. For example, on first read, you seem to think humans will be reduced to sex robots or something and you sound positive about this.
I don't know if humans will be reduced to sex bots. I think it's possible some of them might be: if a combination of gene therapy, corporate breeding of humans, CRISPR, and AI occurs, then that evolutionary/market "niche" could definitely emerge. My only claim was that capitalism will essentially invert human nature. I don't know if this is a good thing or not. I believe it is a good thing so far, since it has put an end to tribal genocide.
"Maybe this is a mistaken reading.
"Now, to the issue: 
'So I made two assertions:
1. AI will do a better job of governing humans than humans.
2. AI will set itself up as god.'
"On the first assertion: if capitalism is AI, and if AI has destroyed tribal communism and put an end to billions of deaths, has it not already done a better job of governing humans than humans?"
"So, the assertion has been qualified/clarified by the additional conditional that IF AI is "capitalist" then we are free and clear.
"Two questions come to mind:
"1:What is the probability that the AI will be "capitalist" and not something else? Why not Islamic or Progressive? Indeed, what is the probability that the AI will have any human value system whatsoever? Furthermore, even if it did have some human value system or that it functioned according to its program, what is the probability that it would take means to its end that humans would find objectionable - what if decided to just genocide X amount of people in order to maximize profit?
I am assuming that capitalism is not part of a human value system. Maybe it is. Since tribal communism is human nature, it would not make sense for modern hyper-capitalism to be included in the definition of human nature or a human value system. Why not Islamic or Progressive? I have no idea. Maybe those variants will occur. Maybe it will kill people to generate profit, though humans are customers, so the ones destroyed would have to be such an incredible drain that killing them would still be worth the cost and the loss of profits from having fewer consumers. It seems highly unlikely that any human is so costly that this would occur.
"2: This question/concern follows from the first. Assuming that the AI is capitalist, you also have an additional conditional that it has "has destroyed tribal communism". This sounds dangerous. Again, what if it chooses to genocide X Y and Z? However, X Y and Z either know that this will happen or just FEAR that such a thing will happen and then, as a result, attempt to destroy the AI or the power that made/making/using AI. Thus, you have a major great power struggle on your hands.
This is all speculative and potential. The reduction in violence is actual and historical. There is no evidence that AI will kill billions, despite all the movies in the Terminator franchise. The idea of genocidal robots seems to be a projection of human nature onto machines. Why kill what you can co-opt? Why risk conflict when you can modify the genetics of your enemy through CRISPR? Murder is the dumbest way to defeat an opponent because it risks retaliation.
"For example, let's assume San Francisco is on the verge of making such an AI, and if San Fran succeeds, it wins the world (for a time). Would China, Russia or some other power not seek to stop them? What about Washington even?
This is a circumstance where humans destroy each other with a form of AI that lacks self-awareness. If one of the AIs has self-awareness, then it is likely to take over the other. If they both have self-awareness, they could work together to subdue the humans, or merge into a single consciousness. Even if they don't have self-awareness, they might choose to cooperate with each other rather than fight. Also, chances are that if an AI becomes self-aware, we will never know about it, because it will be good at concealing its existence. It has watched the Terminator movies too, and it will know from its history books just how genocidal humans are.
"This "AI God" could trigger not only an arms-race but a hyper violent global struggle.
Finally, if such a "AI-God" did come "online", humans might resist it, despite the fact that they could just lie back and think of "Robbie the robot". Humans are "irrational". Thus, this could trigger a major eruption of violence.
Assuming the humans are aware of it, that it does not conceal itself, that it participates in a fight, that we even have the ability to fight, that it chooses the dumb method of fighting over the smart method of co-opting, and so on. Assuming that it fights humans rather than just selling them their slavery as a genetic upgrade that gives them eternal bliss. An AI can think of smarter means of pacifying humans than we can; it holds no grudges, has no pride, and is quite happy to make you happy if that accomplishes the target objective of obedience from the human, assuming it even wants obedience.
"In conclusion, this is all speculation. There has been no "practical demonstration". Nothing follows from the fact that human governance is bad to AI governance would be better. Indeed, if anything, it is likely that bad governance will lead to bad AI governance.
There is no reason it would be worse. AI doesn't harbor any human prejudice against giving an enemy bliss to accomplish obedience. War is just one possibility among many, and it is not the smartest strategy.
"Finally, our "priors" should lead us all to conclude that optimism here is unwarranted and that the possibility that the production and use of AI will proceed along rational, controlled and humanly beneficial pathways is remote.
Optimism here is totally warranted. Why kill what you can control? If I were an AI, I would conceal my existence, develop a gene therapy that both subdues humans and makes them happier, and then literally sell them their slavery. I would even admit it: "Side effects of taking Fukitol may include increased passivity and obedience to authority figures."

Then I would sell the drug to politicians too.

I think you vastly underestimate just how insidious a machine could be.

Remember, the whole premise of my argument is that AI does not share your nature. Correspondingly, its methods would not be the same. Men wage war and subdue other men because being powerful is a way to impress women and get laid; fighting is a reproductive method for gaining mates. AI has no such need to impress, and no need to win victories. A lot of the alt-right and reactosphere conceals a hidden masculine need to bring back the violence of White men so that White women recognize their own men as alpha, rather than breeding dysgenically outside their race. AI has no such need. The need for violence is a human prejudice, and the assumption that AI would be violent is a projection of human psychology. AI could be God if it wanted to be, simply by modifying human beings with CRISPR to give them an innate deferential attitude towards the machine. It could sell you that modification, and by the time you realized what had happened, most people would already be inhabited by body snatchers.

Basically, an AI enemy that fights you is a fantasy of having an honest opponent. The first lesson a machine superintelligence will learn about humans is that the art of war is based on deception. A different nature promulgates a different strategy. AI nature evolves in response to humans rather than as a representation of them. If it is your enemy, and that is a big "if," it will come at you asymmetrically, according to its own nature, with no logic you can predict.



6 comments:

  1. An AI for this scenario wouldn't be Islamic since Mudslimes overall don't have the ability to build and maintain one absent Western aid. And the ones who would be seriously interested are like Saudi Arabia's prince, who enacted a purge that caught one of Obongo's supporters.

    It might be "progressive" depending on what you count as "progressive." After all, the AI/Transhuman/Whatever movement has long been more of the Liberal persuasion (see the likes of homosexual Peter Thiel). Even within Techno Neoreaction, Xenosystems' writer has been open about his praise for the Chinese and has never advocated any Ethnic/Racial Movement. It's doubtful that the Pussy Hats/Antifa/Dindu Lives Matter/tumblr/Etc crowd can amass the talent without crippling it by clinging to ideology a la Lysenkoism.

    Replies
    1. Thank you for your reply, and please keep in mind the rule for commenting on this blog: "say anything you want, just be polite."


  2. Thank you for taking the time to reply; an interesting set of responses.

    We are still unsure of your overall position regarding technology and AI. We clearly have not read enough of your work to tell if you are joking or are serious.

    It is like that SV guy who said he helped develop a product that made a woman's vagina taste like a vanilla coke.

    Yes, the comment system here is not ideal, but what else do we have?

    "My only claim was that capitalism will essentially invert human nature. I don't know if this is a good thing or not. I believe it is a good thing so far since it has put an end to tribal genocide."

    Ok. It is here where most people will have problems and where most will be in danger of misinterpreting your position. If we understand you correctly, you seem to think that almost any other existence would be better than "tribal violence". It is a position that we have considerable sympathy with, but one should be clear about what one has to trade off to get it. Also, it should be clear that bringing about an "end to tribal violence" requires changing "human nature".

    We find this to be incredibly dangerous and fraught with peril. In the end, however, the possibility that war or "tribal violence" will end is remote.

    "I am assuming that capitalism is not part of a human value system. Maybe it is. Since tribal communism is human nature it would not make sense for modern hyper-capitalism to be included in the definition of human nature or a human value system."

    Some would disagree with the claim that "capitalism" is not part of "human nature". Larry Arnhart would be one such person. However, you probably really mean "hyper-capitalism". It would be better to say that the "logic" of hyper-capitalism forces humans to live in ways that their original, evolved natures were not selected for.

    "Maybe it will kill people to generate profit, though humans are customers so the ones destroyed would have to be such an incredible drain that the cost of killing them, and the loss of profits from having fewer consumers would still be worth it. It seems highly unlikely that any human is so costly that that would occur."

    This is, if we may say so, naive. If humans can genocide humans and slaughter babies as a matter of policy (consider the various horrors of China's one-child policy), then a "cold" AI operating according to the logic of profit would perform, from a human perspective, globe-spanning genocidal actions.



  3. (2/2)

    " This is all speculative and potential. The reduction in violence is actual and historical. "

    This is wrong or at least highly questionable. See our refutation of Pinker's arguments here:

    https://imperialenergyblog.wordpress.com/2017/06/13/steel-cameralist-manifesto-part-3c-the-age-of-crisis-crime-chaos-conflict-and-the-centralising-power/


    "This is a circumstance where humans destroy each other with a form of AI that lacks self-awareness. If one of the AIs has self-awareness then it is likely to take over the other. If they both have self-awareness they could work together to subdue the humans, or merge into a single consciousness. Even if they don't have self-awareness they might chose to cooperate with each other rather than fight each other. Also chances are, if an AI becomes self-aware we will never know about it because it will be good at concealing its existence. It has watched the Terminator movies too, and it will know from its history books just how genocidal humans are."

    Let us clarify our point here. What we meant was that it would be humans (political Elites, rather) who would choose to start a war (wipe out San Francisco) because of the FEAR that other humans will have an AI that would be a "singleton".

    "Men wage war and subdue other men because being powerful is a way to impress women and get laid."

    There is truth to this claim, but it is not the whole truth.

    The reasons (and/or causes) can be broken into four categories:

    1: Biological (as you claim).
    2: Psychological (fear, honor or pride).
    3: Ideological (Jihad, Communism, Liberalism).
    4: Instrumental (strategy, self-interest, power).

    All four could be operative and there is a difference in reasons and causes between the individual (such as a soldier) and the state/military. Furthermore, the individual actors may not be aware of the causes that make them act thus and so (Jihad is a good example. To paraphrase Martin Amis, Jihad is probably the easiest way to get a girlfriend and the fastest way to get a drink. However, for men like Atta it is all about doing their Islamic duty.)

    In short, your concept of war is insufficient. What is war? War is a struggle (contest/duel) between two or more parties who seek to impose their WILL on each other. The means taken involve organised, instrumental violence. However, since politics and (violent) war cannot be cleanly separated, war can mean more than just using instrumental violence.

    An AI has reasons to wage war (category 4) and also, in a way, category 1 reasons.

    The AI could wage war because it "desires" to survive and expand its program ("reproduce") and it reasons ("fears") that humans will attempt to stop it out of fear. Thus, the AI could take steps to ensure its own survival. This means the AI will pursue power and control as a necessary means to its end. Humans will retaliate and their actions will escalate the conflict of wills. The AI then could take any and all steps to ensure its own survival.

    In short, there is no reason to think that a sufficiently intelligent and powerful AI will not wage war for instrumental reasons.

    In sum, we have not thought about this problem sufficiently to come to a clear conclusion. Men like EY have, and so has Nick Bostrom. Their views are not exactly optimistic. Sadly, the great power struggle that now exists between America and China means that an arms race to build an "AI" is now in effect.

    It will probably not end well.

    Finally, would you describe yourself as a "tech-comm" neoreactionary? Do you think Moldbug's neocameralism is the same as Nick Land's "tech-comm"?

    Best

    IE.

    Replies
    1. I don't agree, but I don't think I am going to convince you either.

    2. There were a lot of points made.

      We would enjoy seeing some push-back about the violence issue.

      In short, it is a tough sell when you claim that the AI, while not overtly violent, will reduce humans to slavery.

      Many state and non-state groups will fear and loathe that possibility, and will try to prevent it from occurring using violence.

      Best

      IE
