Wed Jun 05 2024

All my anxieties

Let me tell you how I’ve spent the years since around 2019: scared shitless.

In many ways, I had been thinking about this problem for a long time: the tragedy of the commons, tied together with rapid growth in AI capabilities. If you’re reading this in 2024, then you feel the rumble. The pace of development in AI has been insane¹ – and I don’t mean the fake AI (it was mostly branching statements and heuristics) that we, as an industry, oversold for at least a decade or two².

I mean we’re on the cusp of the real deal, honest to god, “you’re going to have to define intelligence for me because this might be it” type of AI.

Enter: all my anxieties.

We’re horrible at this. As a species, I mean. Look around you, and think about all the coordination problems you see. Wars, climate, horrendous resource management that’s honestly put big parts of the globe at a massive disadvantage. Like I said – absolutely horrible at this.

A quick aside: I mentioned elsewhere that I’ve been reading Richard Rhodes’ “The Making of the Atomic Bomb”³. Not because I’m fascinated by WW2, or particularly interested in the minute details of how we got to fission. I’ve been reading it because we’re at a similar moment in time. We’re living through a time when we’re discovering a source of power so large that you are forced to play the game, even if you think the outcome of the game may be disaster.

So, AI. Most people can’t distinguish the output coming off ChatGPT from that of a human writer. That’s quite massive, and it only becomes more massive the more time you sit down to ponder the implications. ChatGPT and diffusion models have caused a tremendous shift in some areas already – ask your artist or writer friends how they’ve been sleeping.

And that is the tip of the iceberg, because AGI⁴ is, according to not just me but people even closer to the iron – working at OpenAI, Anthropic, DeepMind – quite close indeed.

Did I mention we’re horrible at this?

I’m a software engineer. I know what it means to fix an airplane while it’s flying⁵. Sometimes you need to build something as the thing runs. Sometimes you need to tweak something as the thing runs. That’s horrible and bad, and I wish we didn’t have to live like that, but alas.

In typical software engineering, when you are asked to fix the finance system as it is running and you are changing live production data, or running a migration for millions of customers, you at least know what the system does.

I am not exaggerating when I say that it is extremely hard to understand how the AI models of today work. And I don’t mean “it is hard for me, dear reader, because I’m not that bright” – I mean it is hard for the brightest of people (I’m not in that group, to be clear). Interpretability is a massive research field in AI right now because we’re trying to figure out things like “why is this thing not capable of doing X at scale Y, but flies through it at scale Y*2?”

I'm in danger

You can explain how an AI model runs and what it does. But once you stop being able to tell why it became really good at things you didn’t anticipate it being good at, then we’re in trouble.

I digress.

Progress is unstoppable

Remember the letters about pausing AI progress from last year? Yeah, I remember signing those what feels like 20 years ago. Here’s reality (and something I learned reading through how we got to the atomic bomb): once the cat is out of the bag, it is really hard to put the cat back in. And we’re talking about a mathematical cat, built out of matrices, fed gigawatts of power and trained on thousands and thousands of purpose-built GPUs. That baby isn’t going back in any bag.

It is really hard to say no to something that can, at the same time and possibly in the same electrical breath, completely revamp society (something something flourishing), and beat it down to a pulp (something something misinformation, the truth is gone, sorry, you figure it out).

I don’t need to teach you, dear reader, about the tragedy of the commons⁶. Even if you don’t have a definition for it, you see it in your day to day life. If “we” (the proverbial we) don’t build it, “they” (the proverbial they) will, and oh my god “they” can’t have it first, can “they”? So we build it because we must, of course. Shrugging emoji.
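For the programmers in the room, that dynamic maps pretty cleanly onto a one-shot prisoner’s dilemma. Here’s a minimal sketch in Python – the payoff numbers are invented for illustration, only their relative ordering matters:

```python
# A minimal sketch of the racing dynamic as a one-shot prisoner's dilemma.
# The payoff numbers are made up; only their relative ordering matters.
payoffs = {
    # (our move, their move) -> (our payoff, their payoff)
    ("pause", "pause"): (3, 3),  # coordinated slowdown: best shared outcome
    ("pause", "race"):  (0, 4),  # we pause, they get it first: worst for us
    ("race",  "pause"): (4, 0),  # we get it first
    ("race",  "race"):  (1, 1),  # everyone races: shared risk, thin margins
}

def best_response(their_move: str) -> str:
    """Our payoff-maximizing move, given what 'they' do."""
    return max(("pause", "race"), key=lambda ours: payoffs[(ours, their_move)][0])

# Whatever "they" do, racing dominates -- which is exactly the problem.
assert best_response("pause") == "race"
assert best_response("race") == "race"
```

Racing is the dominant strategy for both players even though mutual pausing beats mutual racing. Swap in whatever numbers you like: as long as the ordering holds, the trap holds.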

Like I said, progress is unstoppable.

So you play the game

When I realized what an issue AI would be (in addition to being pretty good at proofreading my stuff, or writing the bits of code I didn’t feel like writing), I started thinking about what could, in fact, be done.

I start with a question before I provide any answers (and don’t get your hopes up, I don’t have a ton of answers): if you had heard about nuclear fission, atomic chain reactions, and the Manhattan Project as the nuclear discovery was taking place, what would you have done? Some people would simply choose to let others figure it out. I’d like to think I would have written a piece (in a journal? Who even knows) about all my anxieties about this new kind of energy and its obvious societal impact. Surprise, a blog post!

I think you should get involved. Even if you don’t know any software engineering, physics or mathematics, I would argue you should get involved. The first thing I’d love you to do is learn about what the real possibilities of AI/AGI/ASI are. Extrapolate a little, and think about the future. Dip your toes in the deep, deep seas of “I can’t imagine anything past this exponential”-induced dread.

About a year and change ago I tweeted “Most things that cause me anxiety are coordination problems.” That’s still true. The invention of artificial general intelligence feels inevitable, and it feels impossible to coordinate. Government doesn’t move rapidly enough to do anything meaningful in the time intervals we’re talking about⁷. AI labs are effectively racing for capabilities, for compute, for energy and for data (ahem, I mean our data). The people who have been trying to figure out AI alignment have been pushed out or simply quit.

If we don’t do something, will they? Probably not. So you do something. You learn. You teach. You communicate. You build better. You use the thing for the positive outcomes, and steer it away from the negative outcomes. You write the stream of consciousness blog post non-stop in 30 minutes and hit publish.

We need to figure this shit out.

I hit publish.

Footnotes
  1. One order of magnitude every 2 years?

  2. Your favorite product wasn’t AI. It was a collection of if statements.

  3. It is an absolutely fascinating book, and I highly recommend it, not just for AI practitioners, but for anyone into physics and, ugh, coordination problems.

  4. A definition of Artificial General Intelligence is honestly not in the cards for this post, but let’s go with the massively simplified: a system that can do about as well as the average human at all tasks the average human typically performs. Defining superintelligence is definitely out of scope here but let’s go with “you should be scared shitless too” for now.

  5. I swear this isn’t a Boeing joke. But it could be.

  6. But if we must.

  7. Lots of people (including AI lab leaders) believe there’s a high likelihood that AGI is a thing in under 5 years. Dario Amodei of Anthropic has said 2026 before. That’s… quite soon indeed.