What are we optimizing for? AGI alignment and our varieties of capitalism
In 2001, political economists Peter Hall and David Soskice published their influential book Varieties of Capitalism, which analyzed and classified the different ways economies are organized. They argued that a nation’s institutions encourage its companies to pursue particular corporate strategies, and thus that different ways of organizing an economy generate varying capacities for production and innovation, alongside different distributions of income and employment.
For example, liberal market economies, including the US and the UK, are characterized by heavy reliance on market distributional mechanisms, strong competition between firms, and prowess in radical innovation. By contrast, coordinated market economies, like those of Scandinavia and Northern Europe, are characterized by a pronounced distributional role for the state, more cooperation between firms, and strength in incremental innovation.
These patterns identified by Hall and Soskice more than 20 years ago are still visible today as the age of AI dawns, and the questions raised by this political economy perspective on AI are more important than ever. The US remains the world’s leader in artificial intelligence with OpenAI at the global technological frontier, and the UK is close behind with its own advanced AI labs. According to Hall and Soskice, the creation of highly innovative firms is not random, but a reflection of the way these societies are organized; the natural outgrowth of an institutional environment and knowledge ecosystem that encourages radical innovation at the fastest possible pace.
And the speed is astonishing: Nvidia’s Jensen Huang argues that artificial general intelligence (AGI) is just five years away, while recent news reports suggest the latest models from OpenAI are making significant strides toward AGI by displaying mathematical reasoning capabilities. This raises some fundamental questions. What set of economic designs and socio-political institutions better prepares us for the risks created by machine superintelligence? And what role can enterprises play in building social, economic, and political resilience to such a profound technological and ideological shock?
Paperclips, production, and profits
Oxford philosopher Nick Bostrom provided the benchmark fable of the dangers of AGI when he explored the question: how can humans retain control of a superintelligent machine that is many times smarter than us? He tells the story of an AI programmed with the sole goal of producing paperclips. The AI learns and innovates to get better and better at producing paperclips, and so develops strategies to secure the resources required to produce more and more of them. At some stage, humanity will have enough paperclips to satisfy demand, and will therefore seek to switch the AI off. But having learned so much that it is now superintelligent, the AI is able to circumvent our commands, use deception and coercion to surreptitiously pursue its goals, and may come to view humanity as a threat to achieving its objectives. It may therefore decide to eliminate us altogether, and we will be helpless in the face of its supreme power and intelligence.
Notice how the goal of the AGI in this story is to maximize production, and how, in pursuing it, the machine adopts capitalistic values that shape and define its objectives and strategies. As Hall and Soskice argued, this is not random either. Given the tremendous resources required (and the high risks entailed) to build such systems of superintelligence, it is natural that they arise from highly capitalistic societies. These societies are organized first and foremost around growing profits, which incentivizes selfish actors to pursue maximizing strategies, sometimes at the cost of harm to other stakeholders in society.
But does a super intelligent machine need the same set of economic incentives as we do?
Is profit maximization an appropriate objective for an AGI?
Or do we all need to reconsider our purpose and what we are optimizing for?
The word economics, of course, is derived from the Greek “oikonomia”, meaning “household management”. Perhaps the arrival of AGI means our societies need to consider organizing around a new principle of Eudaimonics, derived from the Greek word “eudaimonia”, meaning “flourishing” or “welfare”.
Such an overriding social objective might help teach a superintelligent machine to protect humanity rather than destroy it; to use its phenomenal power to engender human health, prosperity, and wellbeing. But to do so we need to start setting a better example for these machines that learn from us, and soon.
The views reflected in this article are the views of the author and do not necessarily reflect the views of the global EY organization or its member firms.