AI is everywhere – in earnings calls, boardrooms, conferences and dinner conversations. In Nvidia’s latest earnings call, the term “AI” was mentioned 114 times.
By Nic Andrew, managing executive: asset management at Nedbank and MD of Nedgroup Investments
Since I first encountered ChatGPT in late 2022, it has become increasingly clear to me that AI’s implications are not incremental, but structural.
The question is no longer whether AI will matter, but what kind of society it will shape.
From the outset, and particularly now, my feelings towards AI have been conflicted: excitement and fascination at its power and potential to solve real global issues (medicine, education, energy, productivity), and terror at its potential negative impact on society – particularly on inequality and employment.
The numbers are staggering and difficult to comprehend – from the initial take-up of ChatGPT (less than two months to reach 100 million monthly active users), to the scale of capital expenditure by large technology companies (Apple, Microsoft, Meta and Alphabet are expected to spend $600 billion on capex this year), to the impact made by Claude Code, which recently celebrated its first birthday and is “credited” with triggering “SaaSmargeddon,” which has wiped out more than $1.5 trillion of market cap from the software sector.
The International Monetary Fund estimates that roughly 40% of global jobs are exposed to AI-driven change (and up to 60% in advanced economies) – some replaced, many reshaped, and others complemented. McKinsey estimates that generative AI alone could unlock $2.6 trillion to $4.4 trillion of value each year – comparable to the output of major economies. Goldman Sachs has argued that workflow shifts could expose the equivalent of 300 million full-time jobs to some degree of automation.
These statistics are abstract until they aren’t. For me, they become personal when I think about my children, on the cusp of entering a labour market that is already being reshaped beneath their feet. Four years ago, I advised my daughter, who was then studying data science, that coding was an essential skill for the future. Not so much anymore.
AI is not a wave. It’s a rising tide.
A wave is something you ride. A tide is something that redraws the coastline – quietly, persistently, and everywhere at once. That is what artificial intelligence will become over the next few years: not a product category, but a general capability embedded in every sector and every way we work – law, lending, medicine, education, consulting, logistics, marketing, and investing. Very few areas seem immune, and all are likely to look completely different by the end of the decade (and probably by the end of next year).
I read with sadness in the press of the passing of Clem Sunter, a business leader and one of South Africa’s most influential scenario planners. I have vivid memories of not only his engaging presentation style, which involved laughing very loudly at his own jokes (and even if you did not get it, you inevitably laughed at his response!), but of his apartheid-era South African scenario planning – clearly articulating the choices society had to make: the high road of a negotiated and peaceful settlement and the low road of conflict and becoming a pariah state. Clem must have presented it to tens of thousands of students, businesspeople and public officials, and certainly played a role in the country’s peaceful democratic transition.
So, it feels appropriate to reflect on how the world now faces some similar choices and scenarios with distinct high road and low road outcomes.
The fork in the road
Over the next decade, two forces will shape the dominant AI future:
Firstly, who gets the capability? Will powerful AI be concentrated in a few firms and countries, or broadly accessible to many organisations and individuals?
And secondly, can we trust the capability? Will governance keep pace – audits, accountability, transparency – or will oversight be weak and chaotic?
Those two forces create four potential futures. The point of scenarios is not prediction; it is preparation. But scenarios also do something else: they expose uncomfortable truths. Because AI is not just a technology. It is a multiplier – of competence, of inequality, of fraud, of institutional trust, of national advantage.
Scenario 1: Jetpacks with Seatbelts (Broad AI + Strong Governance)
This is the “high road” future: AI spreads widely, and society builds guardrails that work. Everyone gets leverage, and everyone benefits from the receipts – the audit trails that keep AI accountable. Many of the world’s most pressing challenges are addressed.
The signature shift is job redesign, not job destruction. Tasks are automated and humans move up the value chain. The best examples are mundane, and therefore powerful: in education, every learner gets low-cost personal tutoring while teachers become coaches; in small business, a one-person company starts operating like a 10-person team and can innovate at speed; in healthcare, AI reduces administrative load and improves triage documentation.
What makes this scenario durable is trust infrastructure. AI systems that really matter – such as those affecting credit, hiring, healthcare and public services – are logged, tested and audited. There is human override. There are challenges, appeals and transparency. That is what provides legitimacy.
The mood here is not naïve optimism. It is practical confidence: the sense that institutions are adapting fast enough to keep benefits broad and uplift society at large.
Scenario 2: The Licensed Pilots (Concentrated AI + Strong Governance)
This is the second “high road” but with a sharper edge. Governance is strong, and capability is concentrated. Safe skies. Expensive tickets.
AI becomes like aviation: heavily regulated, audited, and dependable – but dominated by large organisations and countries that can afford compute, data, and compliance. Trust becomes a moat. The systems are safe, but access is unequal. Smaller firms “rent intelligence” from major platforms. Smaller countries depend on a few AI suppliers, and strategic autonomy becomes harder.
This future can still deliver growth and stability, but it carries significant political risk: when people believe the process is fair, yet the outcomes are consistently unequal, resentment accumulates slowly, and can easily explode all at once.
Scenario 3: Walled-Garden Wealth (Concentrated AI + Weak Governance)
This is the inequality trap: AI works brilliantly, but value concentrates. Productivity rises but opportunity does not.
The most dangerous feature is not mass unemployment. It is the broken first rung. Entry-level roles in law, finance, marketing, software, and administration have historically been training grounds – where juniors do the “grunt work” that becomes the foundation for judgement and progression later. If AI absorbs those tasks without creating new on-ramps, the ladder loses its bottom rung.
This is not theoretical. Signals are emerging in the real world, including early evidence about AI’s impact on graduate job prospects. Because AI exposure is often higher for high-skill workers, disruption does not politely stay “at the bottom.” In a Walled-Garden world, politics becomes brittle because every election turns into a referendum on fairness.
Scenario 4: The Deepfake Discount (Broad AI + Weak Governance)
This is the trust-collapse future. Capability spreads widely, and misuse scales faster than institutions can respond. Reality gets a price tag.
In this world, scams are personalised, deepfakes are routine, and misinformation becomes industrial. The result is not only financial loss. It is social corrosion. When it becomes cheap to fake evidence, verification becomes costly. Transactions require extra checks. Organisations build fraud buffers. Society pays a “verification tax” in time, money, and friction.
Communities then adapt in the most human way possible: they retreat into closed networks – “I only trust my people.” That is how trust collapses: not in one spectacular event, but in millions of small moments where confidence is withdrawn.
The signposts that tell you which scenario is winning
If you want to see the future arriving early, track these four signals:
- Are reskilling programmes producing real mobility?
- Is the first rung reappearing, or disappearing?
- Is AI capability diffusing or concentrating?
- Is trust infrastructure mandatory or optional?
AI will not automatically create a better society. It will create a more capable society.
Whether that capability becomes a lift or a wedge depends on choices made now: by regulators who decide whether accountability is real, by firms that decide whether job redesign is serious, and by education systems that decide whether the next generation gets on-ramps instead of dead ends.
The high road is available. But it is not guaranteed.