Breakwater Debates: A Six-Month Pause on Giant AI Experiments

Over the course of a few days, two Breakwater Strategy colleagues –– Arik Ben-Zvi and Steven Weber –– engaged in an extended email debate over the merits and downsides of the now-famous open letter “Pause Giant AI Experiments,” put forward by the Future of Life Institute (FLI).

That exchange is being shared in The Intersection (edited for brevity) in the same spirit that inspired it in the first place: Not to come to a definitive answer, but to ask pointed questions and try to frame the public debate in a constructive manner. 

Both Arik and Steve are technology enthusiasts, and share a deep conviction in the upside potential of large language models and #generativeAI. Both also see knowns and unknowns in the risk landscape and recognize the potential for harms. They have a constructive disagreement about what to do with all of that right now, and through this email debate they tried to reach some greater understanding by arguing (for the most part, respectfully) about it.

Enjoy –– and feel free to join in on the debate with thoughts of your own.

Arik Ben-Zvi (left) and Steve Weber (right) of Breakwater Strategy

ARIK: 

Let’s start by trying to define the scope of this debate. I think we both agree that generative #AI holds enormous potential. We also agree that it holds all sorts of potential risks. So we come at this from a shared perspective that one of the great tasks of this generation is to maximize the upside while mitigating the downside. The question is how best to do so. 

We are trying to make this plane safe while it’s already flying, which is tricky. The question is: Do we try to ground the plane? Force it to fly much slower while we make it safe? Or do we let it fly and make the necessary adjustments if and when issues arise?

STEVE: 

Agree with that framing.

The plane, such as it is, has definitely taken off. Which isn’t necessarily a terrible thing. Generative AI models are sufficiently complex that there is no meaningful way to “make the plane safe” while it’s on the ground. Or in a wind tunnel simulator. So we need to ask: How fast do we fly? How many new experimental aircraft models do we want to launch and how quickly? And how many passengers should we put on those test flights?

I want to see if we can take one argument about this question off the table: that all of this is simply inevitable, and resisting the inevitable never makes sense. My old colleague Kevin Kelly wrote a book some years ago titled What Technology Wants, as if technology has its own intention and the job of humans is to understand and yield to it. Given events of the last few months and the competitive commercial and geopolitical energies that have already been released, a less dramatic version might be to say “it’s already too late”.

I think these arguments are nonsense. 

Precisely the same arguments were made about nuclear weapons and nuclear energy in the mid-1940s: the technology was so powerful –– energy too cheap to meter, weapons too potent for any government to eschew –– that a future of mass nuclear proliferation and fission energy was inevitable. There were supposed to be a hundred nuclear weapons states, and no coal power plants left, by the mid-1960s. Obviously, it didn’t turn out that way.

Can we take the “inevitability” argument off the table, and agree that humans have a real choice here? 

ARIK: 

Those are all great points, and actually offer a perfect segue into two of my biggest concerns about the proposed six-month pause:

a.) Why now? and,

b.) Why a pause?

I agree that nothing is inevitable. Societies have enormous power to shape what does and does not get scaled when it comes to technology. 

Indeed, it is precisely because I believe in the ability of society to influence the development of technology that I question drastic calls like the demand for an AI pause. As you say, the products that have come to market have been gestating for years. This technology didn’t burst from nowhere over the past few months, nor have I seen evidence that if we don’t pause now, it will be “too late” within a few more months.

So help me understand: Why now? Why a pause?  

STEVE: 

I don’t think we should fixate on the (somewhat naive and oversimplified) notion of a six-month “pause”. 

The people who penned the letter represent a small and specialized segment of those who are worried about the downsides of generative AI. There is a strong tilt toward the existential-risk community, which is somewhat fixated on an extreme scenario originally popularized by Nick Bostrom (the “superintelligence” or “paperclip” scenario). I don’t think they have a particularly deep appreciation of the politics or the commercial pressures, and I believe the idea of a six-month pause is really just a placeholder for a loud statement designed to catch the attention of policymakers and the public. That signal translates to “do something” and, more broadly, spotlights (with media savviness) the need to allocate more resources toward the safety agenda.

In that context, I’m going to flip the “why now” question right back at you: Why wait? What is the argument for going further before asking ourselves how we intend to travel down this path, and why?

ARIK: 

Why wait to slow down innovation? Largely because I don’t know what we would do with the time that a pause or slowdown (whatever the period, real or imagined) would afford us, and because I really fear that if we tag this technology as so problematic that it needs to be artificially slowed down –– no pun intended –– it will have a chilling effect on investment and research over the long term.

Let’s say we have the six-month pause, or some more generic “slowdown” agreement. Which problems do we focus on addressing before allowing innovation to get back up to speed? Is it making sure that a superintelligence doesn’t kill us all? Preparing for economic dislocation from automation? Reducing the incidence of AI hallucinations? Figuring out how to mitigate disinformation?

It concerns me that we are setting ourselves up for an endless pause/slowdown because, with so many potential risks to worry about, the technology will never be “safe enough.”

STEVE:

Let’s clear up some terminology. “AI safety” has come to be associated with the existential-risk scenarios (AI that kills us all) and the “alignment problem,” the challenge of somehow making sure that an AI is actually pursuing the goals that humans intend it to pursue (i.e., not turning the universe into paperclips). I agree strongly that this isn’t an achievable threshold. Humans themselves aren’t aligned on what humans want.

To me, the real focus for AI safety should be engaging in a thoughtful and grounded risk-benefit calculation. Every technology has something on both sides of the ledger, and the question is: What’s a reasonable “set point” for that balance, and how do we measure it?

ARIK: 

Ok. That’s all fair. 

But I am still stuck on the question: Why not allow this technology to evolve, advance, and scale while regulators, academics, and civil society carefully observe, analyze, and, when needed, act to address issues as they manifest?

Obviously, the counterargument is that industry will move too fast and that regulators will never be sufficiently nimble. But to me, the response should be: “Let’s get better regulators!”

I know it sounds simplistic, but I’d say that if you are serious about making breakthrough technology safer for society, the focus shouldn’t be on putting up more and bigger speed bumps, but rather on creating more and better traffic cops.

STEVE:  

Some harms are much more easily managed before they manifest at scale. Here’s an analogy. If I developed a new pharmaceutical with a bunch of known and serious side effects that the medical community does not yet know how to minimize, would you say we should license it for over-the-counter purchase at your local drugstore in order to find out whether there are other, even worse side effects that we haven’t yet discovered?

The pharmaceutical companies themselves wouldn’t want that. The companies pushing for very rapid commercialization and widespread deployment of LLMs shouldn’t want it either. This technology is progressing at an almost unbelievable rate. The upside potential is remarkable in so many respects. But we could be thrown right back into another “AI winter” if serious harms manifest in the near term. No one wants that outcome.

Pharma has a decent model for how to move deliberately through a risk-benefit analysis using staged clinical trials.  

ARIK: 

So let’s explore the clinical trial model. For pharmaceuticals, it’s pretty straightforward. The effects of a medication play out within the human body, and the human body is reasonably well understood: you can build on prior research, mimic it through animal testing, and ultimately study human subjects whose physiology can be closely observed. But does that apply at the societal level? Clinical trials showed that opioids were safe and effective as pain medication, while failing to anticipate the way those drugs would be abused across society.

So how do we apply this model to AI? What will the test subjects be –– individual consumers, small population groups, something else? What will the controls be? Will it somehow be double-blinded or otherwise set up to reduce noise in the data? And what will count as evidence of safe versus not safe?

STEVE: 

There is a lesson to be learned from opioids. It wasn’t just a science problem or a societal problem; it was a knowledge, communications, and incentives problem. The basic science was clear that opioids were addictive, and they were used very selectively. Then, in 1980, a one-paragraph letter published in the New England Journal of Medicine stated –– with shockingly little supporting data –– that opioids in practice didn’t appear to be as addictive as first thought, and argued that the real problem was that physicians were afraid to use them, thereby under-treating pain and causing unnecessary suffering. You may not like the idea of “more research,” but that is exactly what was needed at that moment. Instead, the medical establishment rushed toward the easy answer, and the widespread consequences continue to plague the world today.

Are we really going to repeat that pattern? Is the story of “move fast and break things” really going to play out again so soon? 

We can certainly continue work on LLMs in a more controlled fashion. Here’s a cartoon version of a clinical trial program that I think can work:

📎 Stage 0:  Internal testing and refinement of models and products, inside the company only. The goal is to identify and fix the most glaring problems. Maybe this gets augmented by internal red teams. 

📎 Stage 1:  Screening for Safety. A small, highly contained group of external testers. The goal is to test for safety and major risk factors. 

📎 Stage 2: Treatment vs. Control. This is a larger trial where you now look to assess efficacy. How much better is the AI model, in fact, than what we already have and use? This puts the safety risks in a meaningful context. It’s not worth accepting much risk if the new tool isn’t really that much more efficacious than the old ones.

📎 Stage 3:  A larger-scale test of the “results” of Stages 1 and 2, to see whether those findings hold up at scale and to start to see “side effects” that are rare but important.

[RELEASE TO THE WORLD]

📎 Stage 4:  Post-release monitoring.

That’s not a detailed blueprint, but it’s a framework that makes sense for responsible innovation. And just to be clear: I’m not suggesting that, as with pharmaceuticals, each stage has to last for any particular defined period of time, or that an agency like the FDA needs to give its blessing. 

I’m simply suggesting that this kind of stage-gating logic could form the basis for a self-regulatory regime. Six months might be just enough time for the top ten or twenty companies in the world to agree on the details in a way that serves both their own interests and society’s collective good. Shouldn’t we try?

ARIK:

It’s a compelling argument, to be sure. I still worry about the tendency of well-intentioned safety efforts like this to devolve into something else: box-checking, bureaucracy, regulatory capture, and so on. I know that your concept doesn’t invite or require those things, but regulatory innovation, like any innovation, can create its own harms –– just as technology does.


In any event, Arik and Steve can certainly agree that this issue is going to be with us for a long time to come. AI is starting to change the world. That change is only going to continue. It will be our entire generation’s shared responsibility to bring this technology forward the right way.

If we do, the sky’s the limit in terms of what we humans can accomplish. If we fail… well, perhaps we’ll all make for good paperclips. 🖇

Until next time!