3 Comments

> A new technology catches the eye of Sauron once it becomes clear that the tech is powerful enough to be dangerous.

AI is a perfect example of this.

I don't know why, but I assume there's a large overlap between the readership of this blog and (the cool parts of) the lesswrong community. Anyway, the lesswrong community has, among other things, been freaking out about "AI risk" for 20 years now. And nobody cared, and everyone thought they were weird, and ignored them.

Then, suddenly, AI becomes a useful thing, and Sam Bankman-Fried is briefly one of the richest people in the world. And then Power does what it does: it notices, freaks out, and attempts (and, from my point of view, overwhelmingly succeeds) to take it over.


> If search engines were invented today, we’d be having lengthy debates about search engine alignment.

Idle thought, but we kind of _do_ have those lengthy debates these days. That's half of what "Trust and Safety" _does_ at Google: making sure the search engine results are "aligned" with what they think is correct.


Do they actually focus that much on trust-and-safety-ing the organic search results? Anecdotally, the focus seems to be much more on ads and social media products.
