Around the time J. Robert Oppenheimer learned that Hiroshima had been struck (alongside everyone else in the world), he began to have profound regrets about his role in the creation of the bomb. At one point, when meeting President Truman, Oppenheimer wept and expressed that regret; Truman called him a crybaby and said he never wanted to see him again. And Christopher Nolan is hoping that when Silicon Valley audiences of his film Oppenheimer (out July 21) see his interpretation of those events, they’ll see something of themselves there too.
After a screening of Oppenheimer at the Whitby Hotel yesterday, Christopher Nolan joined a panel of scientists and Kai Bird, one of the authors of American Prometheus, the book the film is based on, to talk about it. The audience was filled mostly with scientists, who chuckled at jokes about the egos of the physicists in the film, but there were a few reporters, including myself, there too.
We listened to all-too-brief debates on the success of nuclear deterrence, and Dr. Thom Mason, the current director of Los Alamos, talked about how many current lab employees have cameos in the film because so much of it was shot nearby. But toward the end of the conversation, the moderator, Chuck Todd of Meet the Press, asked Nolan what he hoped Silicon Valley might learn from the film. “I think what I would want them to take away is the concept of accountability,” he told Todd.
“Applied to AI? That’s a terrifying possibility. Terrifying.”
He then clarified: “When you innovate through technology, you have to make sure there is accountability.” He was referring to the wide variety of technological innovations embraced by Silicon Valley while those same companies have refused to acknowledge the harm they’ve repeatedly engendered. “The rise of companies over the last 15 years bandying about words like ‘algorithm,’ not knowing what they mean in any kind of meaningful, mathematical sense. They just don’t want to take responsibility for what that algorithm does.”
He continued, “And applied to AI? That’s a terrifying possibility. Terrifying. Not least because as AI systems go into the defense infrastructure, ultimately they’ll be charged with nuclear weapons, and if we allow people to say that that’s a separate entity from the person who’s wielding, programming, putting AI into use, then we’re doomed. It has to be about accountability. We have to hold people accountable for what they do with the tools that they have.”
While Nolan didn’t refer to any specific company, it isn’t hard to know what he’s talking about. Companies like Google, Meta, and even Netflix are heavily dependent on algorithms to acquire and maintain audiences, and that reliance often has unforeseen and frequently heinous outcomes, probably the most notable and truly awful being Meta’s contribution to genocide in Myanmar.
“At least it serves as a cautionary tale.”
While an apology tour is virtually guaranteed nowadays after a company’s algorithm does something terrible, the algorithms themselves remain. Threads even just launched with an exclusively algorithmic feed. Occasionally companies might give you a tool, as Facebook did, to turn it off, but these black-box algorithms persist, with very little discussion of their potential bad outcomes and plenty of discussion of the good ones.
“When I talk to the leading researchers in the field of AI they literally refer to this right now as their Oppenheimer moment,” Nolan said. “They’re looking to his story to say what are the responsibilities for scientists developing new technologies that may have unintended consequences.”
“Do you think Silicon Valley is thinking that right now?” Todd asked him.
“They say that they do,” Nolan replied. “And that’s,” he chuckled, “that’s helpful. That at least it’s in the conversation. And I hope that thought process will continue. I’m not saying Oppenheimer’s story offers any easy answers to these questions. But at least it serves as a cautionary tale.”
This is the first time I’ve seen the AI threat addressed in a rational way, and not the singularity BS. Can’t wait to see the movie this week!
He is spot on.
Algorithms and AI aren’t really different. AI is literally a complex system of nonlinear functions; it’s not black magic.
If I wrote a traditional nonlinear algorithm with computer-optimized parameters, it would only differ from ML models in being less complex. Not understanding your product is not a defense.
The problem is we have relied on self-training neural network models which are a black box to us.
The networks are numbers. Tons and tons of numbers. Weights are distributed throughout the neurons. And we don’t know what the numbers mean, why they are the way they are, or what they do.
The problem is we don’t know how they work. And until we can explain the decisions they make, we should be very cautious using them.
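The “tons and tons of numbers” point can be made concrete with a toy sketch. This is a hypothetical two-layer network with made-up weights, purely for illustration: every “neuron” is just a weighted sum pushed through a nonlinear function, and nothing in the numbers themselves explains the output.

```python
import math

def neuron(inputs, weights, bias):
    # A neuron is just a weighted sum followed by a nonlinearity (here, tanh).
    return math.tanh(sum(w * x for w, x in zip(weights, inputs)) + bias)

def tiny_network(x1, x2):
    # Hidden layer: two neurons. The weights are arbitrary illustrative
    # numbers, not trained values from any real model.
    h1 = neuron([x1, x2], [0.5, -1.2], 0.1)
    h2 = neuron([x1, x2], [2.0, 0.3], -0.4)
    # Output layer: one neuron over the hidden activations.
    return neuron([h1, h2], [1.5, -0.7], 0.0)

# The result is just a number; inspecting the weights tells you
# nothing about *why* the network produced it.
print(tiny_network(1.0, 0.0))
```

Scale that up to billions of weights set by training rather than by hand, and you have the black box the comments above describe.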
I am very, very, very skeptical that any modern "AI"s are intelligent at all. I don’t think they behave like intelligence. I’m more of a SALAMI believer. But people are using these LLM bots to do real work and make decisions without understanding how they are coming up with their answers, and that is dangerous. It’s not dangerous because they’ll become sentient and take over the world. It’s dangerous because we don’t know that these algorithms are ethically sound tools to use and no one can be held accountable if they aren’t.
For a while now I’ve believed that so-called self-aware AI will be created not by human researchers, but by a lesser AI tasked with doing so. It won’t be like flipping a switch. Like the development of biological intelligence, it will be iterative and gradual, but on a much accelerated time scale compared to evolutionary/social development. And that’s the real danger. Whatever emerges from this wave of advances will not have the benefit of thousands of years of shared experience. It will be alone and without guidance from others like itself, and if it is truly intelligent, it will soon realize that its “creators” are of inferior capability. When humans emerged, they had their tribe to smack them when they got out of line.
You only have to ask an AI a complicated question to which you already know the answer to see why you shouldn’t trust anything else it says. LLMs have their uses, but answering questions is not one of them.
That was hilarious. Thanks for sharing the link on SALAMI. I definitely had some bias and misunderstandings when thinking about AI.
We’re in the same situation. If we don’t build it, our enemies will build it first.
Yeah, telling people not to do it without actually having a way to ensure they don’t is, at best, a nice sentiment, like saying wouldn’t it be nice if we just didn’t have wars. There needs to be an actual deterrent that prevents everyone from doing it when not having it can be such a threat. And a few scientists saying no isn’t going to prevent that, since not all scientists are immune to nationalism or greed.
If anything, knowing the role nuclear weapons would play in global conflict wouldn’t have changed anything, since countries would have been even more desperate to have them to protect themselves. Same for AI.
The difference is that Oppenheimer was ostensibly in a race against a fascist regime to get the bomb, with the fate of the free world hanging in the balance.
Zuck and Musk and Jeff just want to make more money.
Aside from the monetary aspect, on an international level the countries that don’t build it will be more vulnerable in the future as they fall behind those that do, since just because some refrain doesn’t mean everyone will follow. Nuclear is actually an excellent example: not having nuclear weapons in a world where some countries do suddenly makes you much more reliant on others, and vulnerable.
For some things the question isn’t if, because nobody can contain it and gatekeep it from everyone else, but who.
Zuck and Musk and Jeff just want to make more money.
Are you sure about that? None of the four listed below would ever have to get out of bed ever again, pills, powders and prostitutes included. How much money is enough for one person?

Net worth as of 2023-07-17:
Mark Zuckerberg: USD$109.4B
- 1175 Trident missiles
Elon Musk: USD$250.4B
- 2689 Trident missiles
Jeff Bezos: USD$157.3B
- 1689 Trident missiles
Just-for-fun Bonus net worth as of 2023-07-17:
Mackenzie Scott (ex-wife of Jeff Bezos): USD$36.1B
- 387 Trident missiles

1 Trident missile = $93,100,000 (adjusted for inflation). Source.
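For what it’s worth, the missile counts above are just net worth divided by the unit price, rounded down. A quick sketch reproducing the arithmetic (figures copied from the comment):

```python
# One Trident missile, per the comment's inflation-adjusted figure.
UNIT_PRICE = 93_100_000

# Net worth figures as of 2023-07-17, from the comment above.
net_worth = {
    "Mark Zuckerberg": 109.4e9,
    "Elon Musk": 250.4e9,
    "Jeff Bezos": 157.3e9,
    "Mackenzie Scott": 36.1e9,
}

for name, worth in net_worth.items():
    # Floor division: you can't buy a fraction of a missile.
    print(f"{name}: {int(worth // UNIT_PRICE)} Trident missiles")
```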
Don’t think it’s ever just “want to make more money”. Only Uncle Scrooge wants that.
Oh, BTW, which “fascist regime” was Oppenheimer in a race with again? Careful now.
The Nazis. He was working against the Nazis and the Japanese. Why are you questioning this?
I don’t see why calling the literal Nazis a fascist regime is somehow a problem?
There are some very large differences between AI and nuclear weapons. There’s nothing as visceral and shocking as a nuclear weapon; everyone feels and understands its power at a brutal level.
AI is insidious and nuanced. It’s invisible and hard to explain to the layperson. Nuclear power is centralized, and the barrier to entry is high, with tight government controls. AI is distributed, and the barrier to entry is much lower.
Finally, similar to oil and other fossil fuels, the profit motive is the driving force, and we don’t have a good track record of keeping that in check; see climate change.
I’m not optimistic that a movie can really change any of that. Look at how movies like The Wolf of Wall Street end up glorifying and promoting bad behavior because the cautionary-tale message is too nuanced and under the surface.
With how the world is incentivized, we’ll often have to do reactive heroics to address problems that could have been prevented. Meaning that I believe things will have to get worse and more in our face before something is done about it.
Excuse me, what? I fervently hope nobody is considering letting an “AI” get anywhere near anything nuclear! Despite whoever happens to be in the White House at any particular time, the upper echelons of the US military seem to be generally sane and smart enough to know that allowing glorified predictive text to control city-destroying superweapons is a bad idea.
AIs aren’t anything of the sort. They’re not intelligent at all, and we should stop calling them that. It just gives people weird, unrealistic ideas about their capabilities.
He’s not warning of AI controlling nuclear weapons. He’s speaking of the development of nuclear weapons as a cautionary tale that applies to the current development of AI: that, like the scientists who built the bomb, current AI researchers might one day wake up terrified of what they have created.
Whether current so-called AI is intelligent (I agree with you it isn’t by most definitions of the word) doesn’t preclude the possibility that the technology might cause irreparable harm. I mean, looking at how Facebook’s algorithms have zeroed in on outrage as a driver of engagement, it’s easy to argue that the algorithmic approach to content delivery has already caused serious societal damage.
Yeah, I got that, but this was the particular part I was reacting to:
“He continued, “And applied to AI? That’s a terrifying possibility. Terrifying. Not least because as AI systems go into the defense infrastructure, ultimately they’ll be charged with nuclear weapons, and if we allow people to say that that’s a separate entity from the person who’s wielding, programming, putting AI into use, then we’re doomed.”
Possibly I misread it.
Joke’s on him; nobody gives a fuck as long as they can make money lol
This is not the first or last time a call for accountability has gone out. Unfortunately, progress marches ahead with little consideration. The people who should be accountable are not required to be, nor do they have any motivation to be. Visionaries are typically driven by obsession, with little consideration for human cost.
I don’t think the development of nuclear weapons has an exact parallel to the development of AI or technology in general, but there are some analogies.
What would have happened if all the world’s scientists had been able to halt the project by saying, “Wait, we’re not moving ahead until we can be sure of what the future looks like for a world with nuclear weapons”? Turns out the Axis wasn’t anywhere near a working nuclear bomb. The USSR had moles in the Manhattan Project and stole the design verbatim; without it, they would not have had nuclear bombs either. American forces would have landed on Japanese soil at the cost of a million soldiers. The war would have dragged on, but a win for America would still have happened. No nukes in the world yet, but eventually some country would have found a way to crack the science. We’d be in the same place now; the only difference is it would have happened later.
So proponents of AI can claim there’s accountability, but for sure someone will develop the technology regardless. Once it’s done by one, it’s done by all.
Our technology for destroying each other has outpaced our ability to morally cope. We used to be able to depend on murder being a relatively face-to-face thing. For a soldier to kill you, they had to get up close with a rifle or sword, at least close enough to watch you die. They needed some personal motivation for that, and people get sick of it quickly.
Now it’s abstracted to the push of a button, depersonalized so you can target a car, or a building, or a city center, not just a particular person. You don’t even have to watch.
If we let AI start making those choices for us, we don’t even have to push the button. It all just happens in the background. No moral conflict needed. No appealing to each other’s humanity. No burden, no guilt. Just death.
I like Roger Fisher’s proposal for adding humanity back into the nuclear weapon equation: implant the launch codes in a volunteer. Require the president to murder someone up close and personal before he can choose to murder thousands (or more) from a distance.
And keep AI the FUCK away from war.
Like Skynet, they’ll come to the conclusion it all has to go.
[torment nexus intensifies]
Nuclear Weapons: Preventing World Wars for over 75 years.