Among the many weird, wonderful, and slightly terrifying experiments with ChatGPT recently, one anecdote stands out: a Democratic member of the House reading into the Congressional Record a speech, written by ChatGPT, in support of a bill to create a joint US-Israeli Artificial Intelligence Research Center.
Over the course of just a few months, we’ve witnessed the emergence of the text-to-image generator DALL-E and the large language model ChatGPT, the world’s first broadly public-facing generative AI applications. That “AI revolution” was quickly followed by a tsunami of interest. The technology sector, public and non-specialist media, regulators, and investors have all moved quickly to put their (DALL-E-generated) stake in the ground.
That revolution has been brewing for years, with machine learning systems somewhat quietly optimizing business processes in the back end of many organizations. At the same time, Machine Learning (ML) scientists have been working to address downside risks, such as algorithmic bias, and other ethical concerns. There’s even a small specialist community (of which Elon Musk was once a part) focused on reducing the existential risk of “super intelligent” AI systems escaping the will of their human creators and taking over or destroying the planet.
Fast forward to 2023, and the risks and opportunities of rapidly improving AI/ML are suddenly at the forefront of almost everyone’s agenda: for society, for government, for business, and for capital. AI presents the distinctive promise of a new and foundational technology paradigm, like the Web in the 1990s and mobile in the 2000s, that really does have the potential to sweep like a windstorm over the economy and society. The rate of change demands a capacity for systematic foresight that helps decision-makers distinguish the otherwise fuzzy boundary between profound innovation and overblown hype.
One thing about today’s intersecting forces around AI is nearly certain. The pendulum has decidedly swung away from permissive innovation: the idea that researchers and product developers should be allowed to pursue almost any line of work until and unless someone proves there is a danger to society. It was the defining ethos for tech in the 1990s, when there was widespread bipartisan consensus that the government should adopt a laissez-faire approach to regulating the industry.
Early 2023 sees the pendulum swinging toward something more like the precautionary principle, under which you have to positively prove that what you are doing is safe before you start. “Responsible innovation” was a catchphrase that sought a workable middle ground, but it’s been challenging to put into practice in a manner that firms, society, regulators, and investors can agree upon.
Finding common ground for responsible innovation will require a great deal of education, patience, thoughtful foresight, and collaboration. And it’s a quest that will be undertaken in a heated environment, made hotter still by world powers competing for the economic and military predominance that generative AI is likely to confer.
To get it right (or even close)? That would be the bet of a generation.
Reach out to Steven Weber if you’d like to hear more, or chart your course ahead.
#foresight #insight #strategy #execution #openai #chatgpt #dalle #artificialintelligence #innovation