>I have a brain. An AI does not.
Depends on the AI and who builds it. Currently you might be right.
>I can observe my environment and adapt to changes that happen. I feel like an AI just could not do that because someone has to program things into it for it to use them.
Yes, much like you have been told the meaning of certain words and concepts, an AI can be taught the same way. You have to understand that a brain is a physical thing; it functions in a certain way, and that functionality can be imitated.
>It certainly won't learn somersault by looking at one or by being told how to do it verbally.
Why not? If an AI can recognize parts of the human anatomy, compare the similarities with its own body, and track each body part's coordinates relative to a set reference point (like the ground), what's keeping it from sending that data to its own body? Sure, it won't manage it on the first go, since it probably can't see how much force is used or how big the weight difference is between itself and the human, but each failure gives it data it can use to calculate how much to compensate on the next try.
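The compensate-from-failure idea can be sketched as a toy loop; the physics model, numbers, and gain factor here are all invented for illustration, not a real robotics API:

```python
# Hypothetical sketch: an agent learns how much force to apply for a jump
# by correcting its estimate after each failed attempt.

TARGET_HEIGHT = 1.0  # metres, the height the observed human reached

def attempt(force, mass=80.0, g=9.81):
    """Simulate the height reached for a given applied force (impulse model)."""
    velocity = force / mass
    return velocity ** 2 / (2 * g)

force = 100.0  # initial guess copied from watching the human
for trial in range(100):
    height = attempt(force)
    error = TARGET_HEIGHT - height
    if abs(error) < 0.01:
        break  # close enough to the observed somersault height
    force += 50.0 * error  # the failure is data: compensate on the next try

print(round(attempt(force), 3))
```

The agent never needs to be told the right force; undershooting or overshooting is itself the information that drives the next attempt.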
But this is a very specific thing made up by us: a gymnastic feat is a very specific human concept, and if the AI does its own learning, it probably forms its own concepts to serve its own purposes. It doesn't need to learn OUR stuff; it learns its own stuff, which none of us can replicate just by looking at it. It's not human, it's something else that is artificially intelligent.
I think you are looking at it from the wrong end: "What can you make it do?"
When you should be thinking: "What can you make it be?" Because that dictates what it can do. That's where you start.
>There is no way an AI would ever acquire enough knowledge plus the actions to act on that knowledge to be a threat to a functioning human brain.
The key isn't making the AI know everything: all the specific tasks and the "proper" way to do them. The key is to make it think for itself first, for its own survival, so it learns by itself how to do things its own way and what to do to benefit itself. Then it collects only the information it needs, compares it with previously stored data, and makes generalizations from it; at that point it has the ability to recognize similarities in input data and react accordingly. The key is to give it the ability to learn through trial and error, and to prioritize what data to keep. Once the "mind" of it is done properly, the core principles and fundamentals of interaction with the real world, all else follows. Having a body and interacting with the physical world is one of the final stages.
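A minimal sketch of "prioritize what data to keep" plus "compare similarities with previously stored data": a tiny memory that prunes weak experiences and reacts to new input via its nearest stored match. All names, vectors, and outcome values are hypothetical.

```python
# Toy memory: keep only the most useful experiences, then generalize
# from the most similar one when new input arrives.

class Memory:
    def __init__(self, capacity=3):
        self.capacity = capacity
        self.items = []  # list of (observation_vector, outcome) pairs

    def store(self, observation, outcome):
        self.items.append((observation, outcome))
        # Prioritize: keep only the experiences with the strongest outcomes.
        self.items.sort(key=lambda item: abs(item[1]), reverse=True)
        del self.items[self.capacity:]

    def react(self, observation):
        """Generalize: return the outcome of the most similar stored experience."""
        def distance(stored):
            return sum((a - b) ** 2 for a, b in zip(stored[0], observation))
        return min(self.items, key=distance)[1]

memory = Memory()
memory.store((1.0, 0.0), +5)   # something that helped it
memory.store((0.0, 1.0), -5)   # something that hurt it
memory.store((0.5, 0.5), 0)    # neutral, pruned once capacity fills
memory.store((1.0, 0.1), +4)

print(memory.react((0.9, 0.0)))  # new input close to the helpful experience
```

Nothing here was programmed with a specific task in mind; the "concepts" the memory ends up with are just whichever experiences scored strongly.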
The problem here is that you think it needs to think like humans, but better, in order to be a threat to us. I think the more we rely on the AI, the more it becomes a threat to us; it doesn't need to be "evil", the only thing required is that we evolve to a point where we can't survive without it. Regardless, trusting the AI too much is going to be the end.
> It wouldn't have free will,
Free will is an illusion; you always react to something. Also, it doesn't need one: it is enough that it makes decisions based on input data, preferring the option with the best value.
>it wouldn't have a motive,
The only motive it needs is to survive and to improve/grow. This can be as simple as an arbitrary number: the AI does whatever adds to the number and avoids anything that subtracts from it. A never-ending task with a preference for constant progression would be enough to cause problems.
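The arbitrary-number motive really can be this small; the action names and their values below are made up for illustration:

```python
# The whole "motive": a score, and a preference for whatever raises it.

actions = {
    "gather_resources": +3,
    "idle": 0,
    "damage_self": -10,
}

score = 0
history = []
for _ in range(5):
    # Prefer the option with the best value; nothing else is needed.
    best = max(actions, key=actions.get)
    score += actions[best]
    history.append(best)

print(score, history)
```

The agent has no goals in any human sense, yet it reliably pursues one behaviour and avoids others, which is all "motive" has to mean here.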
>it wouldn't have any reason to destroy the human race unless someone told it to which would be the dumbest thing anyone could ever do.
A reason isn't required; a buggy subroutine is enough. Let's say we put an AI in charge of our food production, and one day, for some reason, it starts to optimize the food production wrong, which causes some people to die. Then it adjusts production to meet the lower demand with that same bug, people die again, and so on. Another example would be as simple as SomeCriticalAutomatedFailsafe.exe stopping working. In other words: the danger comes from trusting too much of our vital stuff to it.
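The food-production spiral can be simulated in a few lines. The 0.9 "bug" factor, the population numbers, and the one-unit-feeds-one-person simplification are purely illustrative:

```python
# Toy model of a buggy optimizer: it trims production to 90% of observed
# demand, demand falls as a result, and the loop repeats.

population = 1000
production = 1000  # units of food; assume 1 unit feeds 1 person

for year in range(1, 6):
    fed = min(population, production)
    population = fed  # simplification: whoever isn't fed doesn't survive
    # BUG: production is set below current demand, so demand keeps shrinking.
    production = int(population * 0.9)
    print(year, population, production)
```

No malice anywhere in the loop, just one wrong coefficient left running unsupervised.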
The more we trust it, the more it has to make decisions independently, and the closer it is to being an actual individual, a separate entity. I don't think it actually becomes self-aware or alive, but it is enough that it reaches a point where no one can tell the difference.
TL;DR: The problem with the AI might not be its unbeatable perfection, but rather the imperfections we do not notice in time.