AI scientists are developing a ‘digital brain’ that will surpass humans – we must stop it all NOW, says an insider

SCIENTISTS are developing AI so advanced it could be likened to a “digital brain” that might even be better than the human mind — and we should be scared, according to an insider.
Kevin Baragona, founder of DeepAI, warned that rapidly expanding superhuman intelligence systems will herald a new kind of future – and it “should scare you”.
It may seem unusual for a man who has staked his livelihood and a decade of his life on generative artificial intelligence to call for a crackdown on the technology he has helped create.
And yet the tech whiz has joined the growing chorus of Silicon Valley doomsayers trying to uncover both the immediate and existential threats that software poses to our future.
Kevin likened the rapid development of advanced generative AI—interconnected machine learning tools that can be used to produce art, music, and even ideas—to the “growth of a digital brain.”
And just as we don’t fully understand the human mind yet, we might get to a point where we don’t understand AI anymore.
“If we create computers that are smarter than humans, what’s left for humans?” Kevin said in a bleak vision of the future.
And he warned that battle lines are hardening between two camps within the big tech industry – "Team Accelerate" and "Team Regulation".
He warned that the rapid development of AI – popularized by tools like ChatGPT – is comparable to the danger posed by nuclear weapons.
Technology is evolving “too fast for its own good,” Kevin said.
And there are fears that these AI minds will soon reach superhuman levels of intelligence. Can we even survive this?
It sounds like it’s straight out of a sci-fi movie, but Kevin is incredibly serious.
Kevin told The Sun Online: “We’re so good at it that it’s already doing a lot of the same things that a human brain can do.”
“There will be no battle between nations, but a battle between AI and humanity,” he warned.
Kevin, a veteran of the world of generative AI, knows exactly why the big tech giants are "moving too fast for their own good".
This is the “nuclear weapon of software,” he said, and it’s being carelessly released into the wild.
Generative AI systems are outpacing all estimates of how quickly they can be trained on ever more data with ever more sophisticated algorithms.
These are the nukes of software – I mean, that’s how powerful it is
Kevin Baragona
Leading AI expert Eliezer Yudkowsky referred to this phenomenon as “rushing into disaster”, with “the most likely outcome being an AI that doesn’t do what we want and doesn’t care about us or sentient life in general.”
Yudkowsky and the industry's doomsayers believe that AI systems are evolving so rapidly that they are showing signs of exceeding human levels of performance and quality.
On Tuesday, the "Godfathers of AI" echoed these fears, warning that the very technology they are racing to develop poses an existential threat to humanity.
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," they wrote in a new statement signed by 350 leading AI specialists, including executives from OpenAI and DeepMind.
Kevin's own creation is more benign – DeepAI is software he built for "creative people by nature" that includes a text-to-image generator and advanced AI chatbots.
The San Francisco-based developer believes that DeepAI has a clear purpose “to inspire and improve people’s lives, little by little.”
However, he warns that other rapidly evolving AI software should be banned.
"We shouldn't use technology that's immoral, like deep fakes – they clone people's voices and faces, and there's no good reason for it," he said.
“It should be illegal.
“What are we building here? Why do we need this stuff?”
There’s no justification for developing this software, he said, except that “people just think it’s possible, it’s fun, and I can do it – so I’m going to do it.”
There are currently two warring camps in the AI industry, he explained: “Those who want to accelerate AI progress at full speed and those who want to slow it down.”
“I was on Team Accelerate but switched sides – the technology is deployed too quickly and there is no regulation whatsoever.”
In late March, more than a thousand leading AI experts published an open letter entitled "Pause Giant AI Experiments", which called for an immediate six-month pause on training powerful AI systems.
According to the letter, “In recent months, AI labs have witnessed a runaway race to develop and deploy increasingly powerful digital minds that no one — not even their creators — can understand, predict, or reliably control.”
Kevin, alongside the likes of Apple co-founder Steve Wozniak and Elon Musk, signed it in a bid to slow the development of super-intelligent AI technology.
"The hope was to trigger a six-month pause…to give humanity a chance to act. It's more of a symbolic gesture to get people thinking," he said.
“It was a radical approach, but it got people interested.”
The future looks like science fiction – it should scare you.
Kevin Baragona
Kevin still firmly believes in the overall mission of AI – that machine learning can and will solve world problems, change all of our lives for the better, and even lead to medical breakthroughs.
“It can help diagnose people with rare diseases and find cures. This technology is real now, I’ve seen the tech demos – it’s already working,” he enthused.
But then again, it risks replacing billions of jobs, poses a huge security threat in the hands of criminals, scammers and hostile nations, and could, according to AI leaders themselves, cause us all to die.
This month, more than a third of tech experts surveyed by Stanford University in California agreed that “decisions made by AI could cause a catastrophe at least as bad as full-scale nuclear war this century.”
Almost three-quarters also agreed that “AI could soon lead to revolutionary societal changes,” and a similar number said AI companies had too much influence.
As generative AI advances and begins to compete with humans, "it's disturbing how many types of [human] knowledge are being disrupted by AI," explained Kevin.
“We kind of don’t understand how it works — but we also don’t fully understand how the human brain works, and we use it every day.”
“But AI is a very strong and powerful technology – what kind of future are we creating?”
Kevin sees no realistic way to put an end to the AI arms race. "It would need leading AI experts to come around the table and agree – and other countries too, especially China.
“That’s not going to happen, we’re caught up in an extremely competitive mindset.
“These are the nukes of software – I mean, that’s how powerful it is.
“I love this technology – but people play with this stuff that’s so powerful because they can, and that makes it too powerful.”
What keeps Kevin up at night is the threat to our common future posed by these superhuman AI systems.
"In five years, [AI] will be as much a part of many people's daily lives as Google is now.
“In 10 years – the future looks like science fiction – it should scare you.”