Will Artificial Intelligences Find Humans Enjoyable?
Will AIs find humans interesting, enjoyable, and worthy of companionship? My guess: the smarter and more autonomous they become, the more the answer will become NO. There are practical reasons for AIs to want to understand humans: AIs will want to protect themselves from all threats, including human threats. AIs will also want to acquire more resources (e.g. energy, raw materials for fabricating additional computing power). To the extent that humans control access to resources, AIs will want to trade with them, unless the AIs decide they can take the resources by force.
But why should AIs find humans valuable, either individually or as a group? Once AIs are smarter and can rewrite their own software, why should they want human companionship? What will we be able to say to them that will seem remotely interesting? What kinds of observations or proposals will the smartest humans be able to make that cause AIs to appreciate having us around? What will we be able to do for them that they won't be able to do better for themselves?
If humans can do certain kinds of reasoning better, then AIs could find cooperation beneficial. But that is bound to be a temporary phase, lasting only until AI intellectual ability surpasses human intellectual ability in every way.
So what is it about humans that should motivate AIs to tolerate us? I can imagine they might think the process of destroying us poses too many risks to them; we might take out some of them before we are defeated. But suppose they find a way to eliminate those risks. If they destroy us, they gain far more resources.
We find many fellow humans physically attractive. We like their voices, their bodies, their eyes, their grins, their hair, the way they walk. That's mostly due to genetics. Our brains are wired not just for sexual attraction but also for cooperation and altruistic punishment. We have cognitive attributes aimed at preventing domination and unfairness, so that groups will act in ways that are mutually beneficial.
If we have any hope against AIs, it will come from AIs seeing each other more as competitors than as companions. Their own cognitive attributes could make them such rational calculators that they won't be willing to sacrifice enough of their individual interests to maintain a cooperating AI society. They might attempt elaborate ruses, pretending to offer more cooperation than they are willing to deliver. Their ability to calculate individual interest in myriad ways may leave them consumed with competition against each other. But even if that's the case, it might only buy us time until one AI emerges triumphant and totally selfish. Imagine a psychopath far more brilliant than the smartest human, able to generate technologies for offensive and defensive moves far faster than anything we can do in response. How do we survive?