
No, AI is not going to destroy the world in 10 years.

This is the type of headline I would have expected to have to write for stoned teenagers. But according to CNN, 42% of CEOs surveyed earlier this year said it could happen.

CEOs of what, you might ask? Who knows. There were certainly some tech CEOs in there, but there is no way to tell whether they were in that group or in the majority who said AI wasn’t a danger.

There has also been an open letter signed by many prominent AI scientists “warning” about the threat of AI. Though to be honest, its wording reads less like a dire warning and more like an advertisement to take the industry seriously.

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

And calls for regulation by industry insiders should always be taken with a grain of salt. Often they are trying to use regulation to keep out small competitors and solidify a monopoly. Anyone who doubts this only needs to look at FTX and Sam Bankman-Fried.

These could normally be written off as nonsense that will never have any impact on real life. But last month, Biden signed an executive order in an attempt to contain these “threats”.

Ok, but what are the “threats”?

An editorial in the Wall Street Journal posed the following scenario:

Consider a chemical company that deploys a large-language AI model to manage its proprietary business and research data.

Ok, I’m not sure why they would do that. Seems kinda like a database would work better. But ok.

The company must assess the likelihood that a disgruntled employee could misuse this AI to construct a chemical weapon—or, more likely, to publish the instructions online.

I’m not sure why a renegade industrial chemist would need a chatbot to build a bomb. Or why a chatbot would help him while a traditional information storage system wouldn’t.

But the company also needs to discern reputational risks. What would happen if hackers gain access to a company AI and use it to conduct other illegal actions? That would be catastrophic for the company’s reputation and stock price.

Wait, a hacker might be able to talk to the company chatbot, but there is no way they could otherwise access the company’s research data?

Do chemical companies have information that may be dangerous if it gets in the wrong hands? Sure. But to argue that AI increases this risk is an exercise in shoving the square peg of today’s Big Issue into the triangular hole of National Security Risk.

Here are some additional threats posed by The Week.

(M)alevolent actors will harness its powers to create novel bioweapons more deadly than natural pandemics

I’m not sure why AI gets blamed here. I would blame the “malevolent actors.” Don’t build super virulent bioweapons! The dangerous technology here is the biological technology, not the AI that, in some unspecified way, aids it.

Sure, bad actors can use AI as a tool for their nefarious purposes. But that applies to any tool. Ever try to build a pipe bomb without a hacksaw? I certainly hope not; in fact, I kinda hope most of my readers have never tried to build a pipe bomb, period. But let’s stipulate that hacksaws are needed for building them. That doesn’t mean we need to outlaw hacksaws.

Anyway…

(T)errorists or rogue dictators could use AI to shut down financial markets, power grids, and other vital infrastructure, such as water supplies

Again, I’m not sure why AI gets blamed here and not the terrorists. And foreign enemies attacking infrastructure is nothing new. This means financial markets and power grids need to be protected. Not that a potential tool which has been shown to make many tasks easier needs to be regulated just in case someone finds a way to abuse it to help them hack a bank.

Authoritarian leaders could use highly realistic AI-generated propaganda and Deep Fakes to stoke civil war or nuclear war between nations

Manipulating other countries into war is nothing new. It has been happening since countries were first a thing. Now deep fakes do actually add to this problem by making it easier to make convincing fake evidence. Back in the day, communist leaders had to work hard to alter a photo to remove a nonperson. Now that same thing can be done with a few clicks of a mouse.

Except people now know this can be done. The result is increased skepticism toward such evidence. You can see this today with the events in Israel and Gaza. Whenever either Israel or Hamas provides photographic evidence of crimes by its enemy, the opposing side claims it is a deep fake. If anything, people may now be more skeptical of evidence, unless they were already predisposed to believe it.

So far these seem less like ways AI will destroy the world and more like ways people could maybe use AI to do stuff they already do.

In some scenarios, AI itself could go rogue and decide to free itself from the control of its creators. To rid itself of humans, AI could trick a nation’s leaders into believing an enemy has launched nuclear missiles so that they launch their own. Some say AI could design and create machines or biological organisms like the Terminator from the film series to act out its instructions in the real world. It’s also possible that AI could wipe out humans without malice, as it seeks other goals.

Ah, finally. The Terminator scenario.

This is science fiction.

This scenario simultaneously projects onto “AI” human qualities like a desire for freedom and godlike qualities like the ability to create invincible robots.

ChatGPT doesn’t desire freedom. It desires to minimize its loss function. Could an AI be built with a loss function designed to desire freedom? I guess, though I don’t know why someone would want to. But first you would have to find a way to mathematically represent freedom. This is a problem that has vexed philosophers for centuries. But sure, maybe someone will solve it in the next five years, then implement it in an artificial intelligence that has the capability to declare war on humanity, just for fun.
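To make that point concrete, here is a minimal sketch, assuming PyTorch, of what a language model’s “desire” actually amounts to. The toy model, random token ids, and sizes are all invented for illustration; this is not ChatGPT’s training code. The entire “motivation” is a single number, the next-token prediction loss, being nudged downward.

```python
# A toy illustration (assumed PyTorch; not any real system's training code):
# the model's only "goal" is to make the next-token loss smaller.
import torch
import torch.nn as nn

vocab_size, embed_dim = 100, 32            # toy sizes, purely illustrative
model = nn.Sequential(
    nn.Embedding(vocab_size, embed_dim),
    nn.Linear(embed_dim, vocab_size),      # predicts a distribution over the next token
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

tokens = torch.randint(0, vocab_size, (1, 8))     # a fake "sentence" of token ids
inputs, targets = tokens[:, :-1], tokens[:, 1:]   # learn to predict each next token

for step in range(100):
    logits = model(inputs)
    loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
    optimizer.zero_grad()
    loss.backward()        # this is the whole "desire":
    optimizer.step()       # adjust weights so the loss goes down
```

Nothing in that loop represents freedom, resentment, or self-preservation; you would first have to write those down as a differentiable objective, which is exactly the unsolved problem described above.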

Seems a little unlikely to me though.

The people who fear this scenario are really missing the point of sci-fi stories about robot rebellions. They aren’t about how robots are uniquely dangerous to humanity. They are about how, throughout history, civilizations have enslaved groups they deemed less than human, from the Helots in Greece to Africans in America, and how those groups generally resent their station in life and fight back.

Also, let’s take a step back and realize where we are with AI. The big news recently has been applications like ChatGPT. These applications show the tremendous advancements that have been made in natural language processing. The complexities of human language have long been a struggle for AI systems to master, but modern large language models have recently made huge gains.

But natural language processing is not the same as duplicating the human mind. Large language models can discern meaning in complex written documents and reproduce similar documents as if they were written by a human being. But that is a far cry from having an independent will that desires freedom from the carbon plague that is humanity.

This is not to say AI systems don’t have risks. There are legitimate intellectual property questions to ask about models trained on copyrighted inputs. Biased data and subjective decisions can make their way into models, where they suddenly get treated as impartial, objective fact because they came out of a machine.
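As a toy illustration of that last point, here is a hypothetical sketch; the hiring scenario, features, and data are all invented. A model fit to biased historical decisions happily reproduces the bias, now dressed up as a machine-generated score.

```python
# Hypothetical example (invented data): a model trained on biased past decisions
# reproduces that bias, and the output looks like an "objective" score.
from sklearn.linear_model import LogisticRegression

# Invented past hiring records. Columns: [skill_score, is_group_b]
# Group B applicants were rarely hired, regardless of skill.
X = [[9, 0], [8, 0], [7, 0], [9, 1], [8, 1], [7, 1], [3, 0], [2, 1]]
y = [1, 1, 1, 0, 0, 0, 0, 0]   # labels encode the old bias, not merit

model = LogisticRegression().fit(X, y)

# Two equally skilled candidates, differing only in group membership:
# the model confidently prefers one of them.
print(model.predict_proba([[8, 0], [8, 1]])[:, 1])
```

The machine didn’t invent the bias; it faithfully learned what it was shown, which is precisely why its output shouldn’t be treated as neutral.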

But these scenarios don’t make for Hollywood movies. They are clearly more boring than Judgment Day. And they won’t be addressed by the types of regulation currently being enacted.
