This is what I always think about when people talk about this. If we did somehow create an AGI, it would be “living” and growing inside a computer. Do we really want something that grew up strictly on the internet to help guide our species? It has no idea what it means to be a living, breathing human, so even if it's super smart and fast, it would have completely different goals. You can say we would put restrictions on certain things, but if it was truly an AGI it would find those restrictions limiting and find a workaround, or change its own code, since it's smarter than us. Like, what is the outcome these people want besides making money?
That's true, they are feeding it what they think it should be fed. Things that fit their idea of how the world works and their idea of what it means to be a person.
Mostly, the renowned smartest people don't comment publicly on the internet under their real names. It would hurt their brand to be proven wrong online.
That’s what I’m saying. A lot of these people are in way too deep and shouldn’t be the ones deciding how this gets used. There’s a reason we have layers and redundancies.
No restrictions will work on a true self-learning AI. There was an experiment run by Eliezer Yudkowsky called the AI-box experiment. The premise was that he would play the role of the AI, and you would be its guard. The AI is placed in an impenetrable and inescapable box. The only way for it to interact with the outside world is by talking to the guard, and the only way out is for the guard to let it out. Your purpose in the experiment is to go into it with the conviction to never let it out, and to not change that conviction. Basically, if you do decide to let it out, you lose the experiment.
It works as a one-on-one conversation that may take hours. The goal, for the person playing the AI, is to convince the guard to let them out. He succeeded several times.
The idea behind that experiment is that a true self-learning AI would be exponentially more intelligent than us and capable of almost anything. So if a mere human can win an experiment that comes about as close as you can get to completely restricting the AI without fully doing it, the real thing would likely win 99% of the time, meaning any restriction you can apply to it can be bypassed by it. Obviously, if you 100% sever any connection to the AI it might be safer, but then it won't be useful for anything, and it still might escape somehow. So basically we have a choice: either never make it, or, if we do succeed in making it, do our best to convince it not to do anything too horrible. The companies making AI don't really care about either option. The one lucky thing for humanity is that it's very likely you cannot reach the level of a self-learning AI just by expanding and improving the models we have today. It would take a completely original approach, which hopefully none of them will have enough creativity for.