Where Is an AI Strategy To Be Found?

Scarcely a year ago, the most popular computer science course at most major universities was about #blockchains. Few people in the business community had heard much about generative pre-trained transformers (#GPT) or Large Language Models (#LLM). Machine learning systems were certainly playing a meaningful role in back-office and other behind-the-scenes functions like inventory optimization –– but the idea of an #AI technology “moment” equivalent in scope and significance to the beginning of the world wide web or the mobile era? That seemed fanciful, and still the stuff of science fiction movies (mostly dystopian, at that).

Fast forward to early 2023, and the story looks very different. The explosive growth and tangible impact of LLMs in public use, along with the arms race among cloud providers to incorporate these technologies into search and enterprise software products, raise monumental and possibly generational questions to grapple with –– likely for years.

A recent blog post by GitHub showed that, at least in software engineering, the future with LLMs is coming faster than most could have imagined. The code hosting platform –– a basic tool developers use to test and collaborate on their code –– launched GitHub #Copilot to the public less than a year ago. The world’s first at-scale AI developer tool provides subscribers with “autocomplete” suggestions as they code. It’s basically #ChatGPT for programming languages, and it works remarkably well (though, as with its natural language counterpart, there are errors and security flaws). Still, Copilot now generates a staggering 46% of the code written by the developers who use it, across all programming languages on GitHub. Put simply: for those developers, nearly half of new code is being written by machines.
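To make that concrete, here is a minimal, hypothetical sketch of the interaction (the function name, docstring, and suggested body are our invention for illustration, not actual Copilot output): the developer types a signature and a docstring, and a Copilot-style model proposes the rest.

```python
# Hypothetical illustration of a Copilot-style completion (invented
# example, not real Copilot output). The developer types only the
# signature and docstring; the model proposes the body.

def average_order_value(orders: list[dict]) -> float:
    """Return the mean 'total' across a list of order records."""
    # --- everything below is the model's suggested completion ---
    if not orders:
        return 0.0
    return sum(order["total"] for order in orders) / len(orders)

print(average_order_value([{"total": 10.0}, {"total": 30.0}]))  # 20.0
```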

You don’t have to fixate on that halfway threshold to recognize that virtually any language-intensive activity –– from copywriting to lobbying to creative arts to the drafting of laws –– is either already facing or soon will face a wave of machine learning integration. That justifies a level of unease about the prospect of not only individuals’ day-to-day work, but the fundamental value propositions of entire companies, being disrupted by AI.

It’s not surprising, then, that 37% of Americans feel “more concerned than excited” about the integration of AI in their everyday lives –– more than double the 18% of Pew Research Center survey respondents who are more excited than concerned. 

We’ve heard colleagues, clients, and friends from C-suites and boards talk about the need for an “AI strategy” and for “AI governance and oversight mechanisms” inside their firms. We believe that level of abstraction may not be the best approach. AI is a colloquial term for a large and diverse bundle of technologies. As such, it describes how human beings perceive certain kinds of machine outputs more than it describes what the machines are actually doing, or how those capabilities affect the enterprise. Still, it’s easy to understand the motivation behind the ask.

It’s trite but true to say there is no single formula, best practice mantra, or easy answer. We’ll be sharing a number of ideas and insights on this question over the next few months (as will many others, of course). For the moment, we offer a simple observation and starting point about the nature and direction of the search for answers. It’s not only that there is a massive rush for strategy; it is that the demand for strategy sits at the intersection of (at least) four distinct Vectors with meaningfully different priorities, anxieties, time horizons, and risk appetites.

For the moment, here’s a vastly oversimplified scheme to help start to illustrate how AI is pushing and pulling on each of the Four Vectors:

  •  The State naturally focuses on systemic risk and national security, as well as the medium-term implications for labor markets and even –– in some cases –– political culture. If LLMs are widely adopted inside a country’s major enterprises, and it then turns out that, under some circumstances, the models produce similar or correlated errors, is the State then exposed to systemic risk within critical sectors like banking, manufacturing, and government services? (A toy sketch of that correlation dynamic follows this list.) How secure are these models from adversarial machine learning attacks? How much disruption is in store for labor? And can governments stand back and rely on society to manage the mis- and disinformation challenges that generative AI is already putting in front of us?
  •  Capital Markets naturally focus on costs, productivity, and industry structure. LLMs are expensive to train and run at scale. The best machine learning scientists and engineers number not in the tens of thousands, but in the hundreds, or at best thousands –– which means the talent capable of building state-of-the-art models is highly concentrated. The general productivity impact of widespread model deployment could be extraordinary –– or perhaps not; we simply don’t have sufficient evidence to make good judgments yet. Is a dollar invested in generative AI more productive right now than a dollar invested in conventional and familiar digital transformation initiatives, which in most industries still have a very long runway ahead of them?
  •  Society (as the Pew survey cited above and other polls indicate) is expressing understandable anxiety about the impact on labor, culture, and –– brewing under the surface –– what it means, distinctively, to be a person. For example, the media has recently highlighted cases where human provocateurs poke ChatGPT into insulting and even aggressive “behaviors” over the course of long, drawn-out arguments. People naturally tend to confuse speech with thought, since the primary way human beings access each other’s thoughts is through what we say to one another. What if we discover that aggression is easier to elicit from LLMs than insight, empathy, or other more prosocial forms of engagement? Would the other benefits be sufficient to absorb the costs to society of widespread use?
  •  And finally there is the business itself, the Organization –– the primary locus of decision making and deployment, where LLMs will either be incorporated into general use and specific applications or, again, perhaps not. Organizations are grappling with how best to make those decisions, and with the general challenge of a technology evolving so quickly that today’s decision could be rendered obsolete in a matter of months or less. Few senior decision makers can point to experience with that level of uncertainty and dynamism in decisions that could be strategically critical, even existential, for the Organization. Looking for a best-practice set of guidelines, or even guardrails, simply isn’t an option at present.
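As promised in the first bullet, here is a toy simulation of the State’s correlation concern (entirely our illustration; the 5% error rate and 100-firm sector are arbitrary assumptions, not estimates): the same average error rate that is survivable when firms’ models err independently can occasionally take down the whole sector at once when every firm relies on one shared model.

```python
import random

# Toy sketch of correlated model risk (illustrative assumptions only):
# 100 firms, each relying on a model that errs on 5% of decisions.
random.seed(0)
N_FIRMS, P_ERROR, TRIALS = 100, 0.05, 10_000

def worst_simultaneous_failure(shared_model: bool) -> float:
    """Worst fraction of firms erring at the same time across trials."""
    worst = 0.0
    for _ in range(TRIALS):
        if shared_model:
            # One shared model: errors are perfectly correlated,
            # so in a bad period every firm fails together.
            failures = N_FIRMS if random.random() < P_ERROR else 0
        else:
            # Independent models: each firm's errors are uncorrelated.
            failures = sum(random.random() < P_ERROR for _ in range(N_FIRMS))
        worst = max(worst, failures / N_FIRMS)
    return worst

print(worst_simultaneous_failure(shared_model=False))  # typically ~0.15
print(worst_simultaneous_failure(shared_model=True))   # 1.0
```

The average error rate is identical in both cases; what changes is whether failures can synchronize, which is precisely the systemic-risk question the State must weigh.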

Even with these vastly oversimplified characterizations, a pointed observation stands out: there is no existing algorithm, or even rule of thumb, for reconciling these orthogonal, and sometimes conflicting, demands. A second, derivative observation is that solving this problem over time may turn out to be a good example of where human intelligence will continue to excel over machine learning. But it is too soon to say even that for certain.

What is certain is that the idea of landing on a single “AI strategy” –– and then executing on it –– is probably not a good mental model for moving forward with any of the Four Vectors. This technology is going to require a much more nuanced, ongoing strategic conversation: one that includes small experiments, clear communication, dynamic risk management, and the sharing of lessons among firms and across the Vectors.

It’s often said that for every dollar spent on technology, organizations end up spending ten more figuring out how to put that technology to good effect. This time –– and with this particular technology –– the premium on focus, patience, and disciplined attention to how the Four Vectors are experiencing the AI revolution may be even greater than that tenfold multiple suggests.