Publisher's Note: This post appears here courtesy of The Daily Wire. The author of this post is Hank Berrien.
In an interview with Tucker Carlson, Elon Musk revealed he will create a third option to challenge Microsoft's and Google's artificial intelligence platforms, and warned that without proper guardrails the technology could cause "civilizational destruction."
Microsoft, which has invested billions in OpenAI and its ChatGPT platform, and Google are both racing ahead in the development of AI. Last month Musk, who co-founded OpenAI in 2015 before stepping down from the company's board in 2018, registered a firm named X.AI Corp after he and other AI experts and industry executives had called for a six-month pause before creating systems more powerful than OpenAI's GPT-4.
"I will create a third option, although it's starting very late in the game, of course,"
Musk told the Fox News host. "It hopefully does more good than harm. The intention with OpenAI was obviously to do good, but it's not clear whether it's actually doing good."
One early signal that OpenAI's ChatGPT could go off the rails, Musk said, is that it has already been shown to be biased. Numerous "tests"
of it have shown it heaps praise on Democrats while being scornful of Republicans and conservative ideology.
"I'm worried about the fact that it's being trained to be politically correct, which is simply another way of being untruthful, saying untruthful things,"
Musk said. "That's certainly a path to AI dystopia, to train AI to be deceptive."
"What is happening is that they're training the AI to lie," he added.
Musk said his version of AI won't be infected with partisan politics and will aid in mankind's quest for objective truth and understanding.
"I'm going to start something which I call 'TruthGPT,' or a maximum truth-seeking AI that tries to understand the nature of the universe,"
he said. "And I think this might be the best path to safety in the sense that an AI that cares about understanding the universe, it is unlikely to annihilate humans because we are an interesting part of the universe."
Futurists have for decades warned that AI will quickly surpass the capabilities of humans and will inevitably put its own survival ahead of mankind. Recently, Musk joined hundreds of top tech and science experts who signed a letter calling for a six-month pause in AI development, warning that recent advances pose "profound risks to society and humanity."
"What happens when something vastly smarter than the smartest person comes along in silicon form?"
Musk told Carlson, adding that government regulation is necessary to prevent disaster. "It has the potential of civilizational destruction."
Humankind now has perhaps its last chance to put controls in place, Musk said.
"Regulations are really only put into effect after something terrible has happened,"
he said. "If that's the case for AI, and we only put regulations into effect after something terrible has happened, it may be too late to actually put the regulations in place; the AI may be in control at that point."
Musk offered a simple example of how a deceptive, or at least biased, AI platform could wreak havoc on mankind.
"If you have a super-intelligent AI that is capable of writing incredibly well in a way that is very influential, convincing, and it is constantly trying to figure out what is more convincing to people over time, and then enters social media, for example ... and potentially manipulates public opinion in a way that's very bad, how would we even know?"
Google co-founder Larry Page doesn't seem to understand the grave risks posed by AI, as the search giant hurtles ahead in the development of the technology, Musk said.
"The reason OpenAI exists at all is that Larry Page and I used to be close friends,"
Musk said. "I would talk to him late in the night about AI safety. My perception was that Larry was not taking AI safety seriously enough."
Page wants digital intelligence, "basically a digital god, if you will," as soon as possible,
Musk said. The fellow billionaires had a falling out over their disagreements about AI, which Musk believes can be positive, but only with controls in place.
"At one point I said we've got to make sure humanity is okay here, and then he called me a 'specist,'"
Musk recalled. "And there were witnesses, I wasn't the only one there when he called me a specist, and so I was like, 'Okay, that's it. Yes, I'm a specist. You got me. What are you?' Busted."
Musk said that at the time, Google had a near-monopoly on AI technology and talent, and he decided it was not in safe hands with Page. That prompted him in 2015 to invest $100 million in OpenAI, which he envisioned as a nonprofit that would be open-source to the public. But in 2019, OpenAI became a for-profit organization, with Microsoft as a major investor. It is believed to be worth nearly $30 billion.
"I'm still confused as to how a non-profit to which I donated
$100M somehow became a $30B market cap for-profit,"
Musk tweeted last month. "If this is legal, why doesn't everyone do it?"