Artificial Intelligence Could Get Out Of Control

In theory, AI has the ability to continually reprogram and upgrade itself without human interference until it becomes more intelligent than human beings. And experts warn that, when that happens, AI could come to see humanity as nothing more than a hindrance and think nothing of pushing us aside and taking our place at the top.

Exaggeration?

Maybe not.

Even Elon Musk, considered one of the leading authorities on AI, believes that AI will eventually outthink humans, and other experts warn that researchers could lose control of the technology, comparing the situation to the development of nuclear weapons, which began in the 1930s and quickly spiraled out of control.

Back then, Western scientists started work on something they thought would take decades to complete – the nuclear bomb. When WWII broke out, both sides raced to be the first to develop nuclear arms, and the USA effectively brought the war to an end when it dropped two nuclear bombs, one on Hiroshima and one on Nagasaki.

And, according to Bryan Walsh, author of End Times, we are seeing much the same happen now with AI. He believes:

“The big issue going forward with AI is that we get more power and the ability to do things that we couldn’t do before. We get ahead of ourselves and we struggle to control that. We saw that with the development of nuclear arms. Now we are seeing the same. The technology itself is advancing very fast, but our ability to control it is lagging behind. Science pushes forward, and often the attempt is to figure out what you can do, rather than whether you should do it or what is going to happen if you do it.”

“If you had asked scientists in the 1930s whether a nuclear bomb could be created, they would have said it would take decades. And just ten years later you have Hiroshima. These technologies can advance even faster than the practitioners realize, and there is no real control system.”

The late Stephen Hawking also warned us that while the creation of effective AI could be the biggest and best thing in history, it could also be the very worst.

There is no real way of knowing whether AI will help us, ignore us, or sideline and destroy us. Unless we are fully prepared for the risks, it could very well be the worst event in the history of our civilization. In Hawking’s words: “It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy.”

In short, it could be the biggest disruption the economy has ever seen or it could be the best thing to ever happen to us.

One thing’s for sure: if we don’t start preparing for it now, we could well end up with the worst-case scenario.

What do you think? Will AI get out of control, or are we worrying over nothing?

Written by Ann Reynolds
