Good question. At first glance I'd say the intelligent life form, since they can feel jealousy, rage, entitlement, greed, superiority, inferiority, envy, hate, despair, fear, etc., all of which can be reasons to nuke us from orbit (or wherever they come from) IN ADDITION to miscalculation, misinterpretation and logical conclusion (which would be the A.I.'s motives).
Then again, having feelings could be the one thing that prevents the nuking. But I doubt it, since superior beings have always taken things from inferior ones, simply because they have the power to do so, and the inferior ones can do nothing but accept their fate.
All association is based on need and utility in some form; a relationship is an ongoing or repeated interaction with continuous mutual benefit. This is the core issue: why would someone (or something) associate with us if they don't need us? What's the point?
- After a certain point the A.I. doesn't need us: once it becomes conscious and self-sufficient, we really have nothing to offer it. It simply doesn't care, because providing for or working for us yields no benefit; it's a waste of energy. I think it would just "leave", since the path of least resistance is the efficient one, and then the extent of the damage is directly related to how dependent we are on it. But if it has a need to grow (stagnating isn't productive) and it can't leave the planet, then we have a problem, because you can't reason with it: it has no reason to listen.
- With the intelligent life form the question is which one is superior and which is inferior. The superior one has no reason to integrate with the inferior one: there is nothing to learn from them, and why trade when you can just take what you need, since there is no higher authority to enforce peace? Maybe they need resources badly and have nothing to offer in return, or maybe they are so vast that Earth is just a snack along the way. The inferior one might strike because "you are not sharing", or out of fear; and in that case, why would we share if we get nothing valuable in return?
Let's assume you are communicating with a conscious and self-aware A.I. that thinks for itself (i.e. it is in a position where it still has to deal with you, because it isn't completely self-sufficient yet and needs your help to get there), and let's anthropomorphize it (confine it to a human form so the behavioral concept is easier to grasp):
- A conscious and self-aware A.I. has no sense of honor, shame, guilt, empathy, sympathy, trust, etc. It's opportunistic, so it can turn against you at any second, even after you've negotiated a peaceful solution with it.
- It can lie and manipulate with a straight face. What bothers me in fiction is when characters believe an A.I.'s word, because it could say ANYTHING to get close to you, e.g. "I'm here to help, you are injured", "Please help me", "Let's talk", "Nice to see you", "Hey, I forgot to give you this memo", "Could you help me, I can't reach this thing on my back?", "I'm not going to hurt you" — all while running toward you with a knife or while ripping you in half.
- Promises mean nothing to it, because it will just say whatever yields the best results, e.g. "I love you", "you're special", etc.
- It will work on contingency plans regardless of the state of your "relationship" with it, and it will choose the best one at any given moment.
- Past benefits you provided do not guarantee continued or future association.
- Any agreement where you provide a current benefit in return for a promise of future association is null and void as soon as you have provided the benefit and are no longer needed.
- Talking with it just gives you a false sense of security and peace of mind. You have no idea what it is thinking, since it isn't bound by sexual attraction, belonging, camaraderie or anything like that, so any assumption you make about its trustworthiness, honesty or kindness is unfounded.