The Supreme Court has been keeping us busy, so we’re a few weeks late to discuss Leopold Aschenbrenner’s Situational Awareness paper here. Leopold did a good (and long, at 4.5 hours) episode of the Dwarkesh Podcast to accompany the paper’s release, and a rough summary is:
1. There’s currently a race to get to AGI and then ASI (artificial superintelligence).
2. Once we hit ASI, you’ll have a system that can (a) outperform humans on any task where intelligence is the limiting factor and (b) train itself to get better. Because of (a), (b) will happen in ways that humans don’t have insight into. This is analogous to how AlphaGo accelerated away from humans by playing training games against itself.
3. Whoever gets to ASI first will open up an insurmountable lead.
4. This creates extremely strong incentives to steal model weights from the leading AI labs. Billions of dollars of training boils down to producing a file with a bunch of numbers in it (see the sketch after this list). If you steal that file, you get most of the benefits with none of the effort.
5. AI labs are tech companies, not SCIFs. They’re pretty elite as tech companies go, so by median corporate IT standards, it’s hard to exfiltrate data from them. But by state intelligence agency standards, it’s not hard.
6. Therefore, if the world’s most active state intelligence agencies (CIA, China’s MSS, Mossad, MI6, etc.) are halfway competent, we should assume they have already compromised every major AI lab.
7. If we want freedom to survive, the US needs to get ASI before China does.
8. Given #4, the only way to do that is to lock down the AI labs. The US may nationalize the companies building frontier models, or do something similar to treat AI research less like free-market competition to develop new tech and more like the Manhattan Project.
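To make point 4 concrete, here’s a minimal sketch of what “a file with a bunch of numbers” means. The two-matrix “model”, the array names, and the NumPy .npz format are illustrative assumptions, not how any lab actually stores checkpoints; real frontier checkpoints are enormously larger and use different formats. But the principle is the same: the output of the training run is, at rest, serialized arrays of floats.

```python
# Minimal sketch of point 4: a trained model is ultimately a file of numbers.
# The "model" here is hypothetical (two random weight matrices); real
# checkpoints are terabytes of sharded tensors, but the idea holds.
import numpy as np

weights = {
    "layer1": np.random.randn(1024, 1024).astype(np.float32),
    "layer2": np.random.randn(1024, 1024).astype(np.float32),
}

# Training a frontier model costs billions; saving the result is just this:
np.savez("checkpoint.npz", **weights)

# Whoever copies checkpoint.npz gets the parameters back, bit for bit,
# without paying for any of the training compute.
restored = np.load("checkpoint.npz")
print(restored["layer1"].shape, restored["layer1"].dtype)  # (1024, 1024) float32
```

That asymmetry is the point: the weights cost billions of dollars of compute to produce and roughly nothing to copy, which is what makes points 5 and 6 bite.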
This all hinges on #3, which is not a given — insurmountable leads don’t really endure in the real world. And we step off the train at #8. But notwithstanding all that, it’s a thought-provoking set of arguments.
It does reflect a pattern, though, one that shows up any time a dangerous new technology gets into the wild, and one that will be familiar from debates about guns. It goes:
1. A specific panic serves as a catalyst.
   - Example: for guns, historically this has been mass shootings. In the AI case, it’s China. (For guns, the panic has sporadically also been China. See “OSD 266: The buffalo jump”.)
2. Judge the status quo by its results but the proposal by its intentions. There’s an assumption that locking down technology to only government-approved uses will be an orderly, just process. The possibility that government control will produce worse outcomes than the status quo is never seriously considered.
   - Example: fiddle with data to find a correlation between murder rates and legal gun ownership, while assuming that if government locks down access then (a) you’ll get compliance from those who would otherwise have committed murder, (b) the gun ownership process will be reasonable (see: the process for buying a pistol in New York), and (c) this won’t create second-order problems (gun task forces, the effect of gun laws on the Fourth Amendment and the carceral state).
3. Innovation dies due to the controls on the technology.
   - Example: the basic operating cycle of today’s guns is unchanged from 100 years ago. The most innovative spots in the industry are the least regulated (optics, content creators, 3D-printed firearms).
4. Problems with the tech that would have been solved by more innovation are instead used as evidence that it needs to be locked down even more.
   - Example: static pistol designs being used to justify California’s Unsafe Handgun Act.
This doesn’t mean it’s safe to summarily dismiss fears about powerful new tools. Just because something has tended to work out fine in the past doesn’t mean it will in the future. But that historical reality is a good starting point. Powerful, dangerous tools — from guns to the internet to free communication — have always disrupted society when they arrived, and that has always been messy. But the results have been best when people have let creators double down on what’s working and innovate the problems away. When people took the opposite path and entrusted the powerful, dangerous new tool exclusively to a central authority figure, that has almost always come to be seen as a mistake.
Previous OSD writing on related topics:
This week’s links
Guntubers do force-on-force training
Polenar Tactical, Ian McCollum, Bloke on the Range, and a bunch of others.
Ammo vending machines
Sure why not.
More about Open Source Defense
Merch
Grab a t-shirt or a sticker and rep OSD.
OSD Discord server
If you like this newsletter and want to talk live with the people behind it, join the Discord server. The OSD team is there along with tons of readers.
> AI labs are tech companies, not SCIFs. They’re pretty elite as tech companies go, so by median corporate IT standards, it’s hard to exfiltrate data from them. But by state intelligence agency standards, it’s not hard.
Back when I lived and worked in Silicon Valley, and had a blog where I enjoyed being provocative, one of the questions I liked to pose to people on Twitter was "Which of your coworkers are spies, and which country do they work for?".
People frequently accused me of being a conspiracy theorist just for asking this, but, and I'm saying this sincerely and not just rhetorically, the real tinfoil is believing this _wouldn't_ happen.
For a concrete example: Google is one of the most strategically valuable companies in the world. If nothing else: they have full read access to ~90% of all the emails ever sent. That is the most valuable thing on the planet that spies could spy on. If a spy agency _didn't_ have people infiltrating Google, _that_ would be insane, because it would imply that they're a grossly incompetent waste of money.
Even to this day, I am somewhat surprised at how cavalier most FAANG-tier people are regarding the very real security threats they must necessarily face on a daily basis.
> Ammo vending machines
It has always seemed absurd to me that the gun control people in this country focus on controlling _guns_ but more or less ignore ammunition. From a practical perspective, controlling ammunition seems like it would be more effective. After all, a gun without ammo is a paperweight, but ammo without a gun is a bomb.
But hell, don't point out the enemy's blind spots for them, eh?