A few months after “OSD 246: Eroom’s law”, we’re back with a breakdown of another proposed AI ban. This one, like Steven Sinofsky’s a few months ago, is a good one, and it’s focused on a California bill that would “create a sweeping regulatory regime for AI, apply the precautionary principle to all AI development, and effectively outlaw all new open source AI models”.
What does it mean to be a covered model in the context of this bill? Basically, it means developers are required to apply the precautionary principle not before distribution of the model, but before training it. The precautionary principle in this bill is codified as a “positive safety determination,” or:
a determination, pursuant to subdivision (a) or (c) of Section 22603, with respect to a covered model that is not a derivative model that a developer can reasonably exclude the possibility that a covered model has a hazardous capability or may come close to possessing a hazardous capability when accounting for a reasonable margin for safety and the possibility of posttraining modifications.
…
The AI safety advocates who helped Senator Wiener author this legislation would probably retort that AI models make all of these harms far easier (they said this about GPT-2, GPT-3, and GPT-4, by the way). Even if they are right, consider how an AI developer would go about “reasonably excluding” the possibility that their model may, say, launch (or “come close” to launching) a cyberattack on critical infrastructure. Wouldn’t that depend quite a bit on the specifics of how the critical infrastructure in question is secured? How could you possibly be sure that every piece of critical infrastructure is robustly protected against phishing attacks that your language model (say) could help enable, by writing the phishing emails? Remember also that it is possible to ask a language model to write a phishing email without the model knowing that it is writing a phishing email.
…
Imagine if people who made computers, or computer chips, were held to this same standard. Can Apple guarantee that a MacBook, particularly one they haven’t yet built, won’t be used to cause substantial harm? Of course they can’t: every cybercrime, by definition, requires a computer.
That last paragraph parallels a critique of Heller and Bruen that has come from both people who like gun rights and people who don’t.
Heller imposed a “common use” test, saying that a gun law is unconstitutional when it affects guns that are in common use for lawful purposes. Bruen enshrined a similar test, saying that gun laws must be grounded in the “text, history, and tradition” of the country — i.e. the law stands if it has an analog in the nation’s text, history, or tradition.
What both of those do is freeze in amber the technology at the time the standard is set. It then becomes very hard to make fundamentally new technology (e.g. to explore fire-by-wire to its fullest promise) or to strike down bad laws that fell into the amber before it hardened.
Something to think about in future court cases. Bruen was a big win. But it puts into perspective how much gun regulation is already on the books — the legal standard that gun rights advocates were overjoyed to see imposed is the very same legal standard that tech folks realize would be catastrophic for innovation if applied to AI.
The logical conclusion is that the laws on the books today have hobbled gun innovation so deeply that we don’t even realize we’re missing out. The best time to fix that would have been decades ago, by preventing the laws from passing in the first place. The second-best time is today, by innovating relentlessly. Keep it up.
This week’s links
Lana Del Rey with a P365 on Instagram
Normalization.
David Yamane on a new book of gun orientalism
What I call “The Master Narrative of Democracy-Destroying Right-Wing Gun Culture and Politics” is not just a scholarly paradigm in gun studies. It is arguably even more evident among media elites and other cultural gatekeepers.
More about Open Source Defense
Merch
You can buy a shirt with the artwork from this week’s newsletter. Only available for this week, gone forever when next week’s newsletter comes out.
OSD Discord server
If you like this newsletter and want to talk live with the people behind it, join the Discord server. The OSD team is there along with tons of readers. See you there.