Insight Report: Key Findings (9)
Global Risks In-Depth: AI at Large

In the Global Risks Report 2024, we explored the risks of AI, focusing on market concentration and its effect on AI development, inequality between those who own AI technologies and those who do not, and the use of AI in geopolitical and military conflict. With rapid developments in AI over the last two years, we revisit the risks generated by a world in which AI use is ubiquitous across systems and economies.

AI has shifted from a frontier technology to a systemic force shaping economies, societies and security. The global market size for AI is projected to rise from an estimated $280 billion in 2024 to $3.5 trillion by 2033. Adverse outcomes of AI technologies is ranked in the Global Risks Perception Survey 2025-2026 (GRPS) as among the most consequential long-term global risks and the one with the largest upward shift across all 33 risks surveyed, from #30 in the two-year outlook to #5 over the 10-year horizon. Over time, the diffusion of generative and agentic AI systems has the potential to transform economies and, while the opportunities and benefits are vast, there are also risks that could manifest rapidly due to market forces, geopolitical pressures and the slow development of governance frameworks.

Both the opportunities and risks associated with AI will be unevenly distributed. Access to AI infrastructure – as well as to electricity, internet access and data storage – will amplify economic power shifts between countries over the next decade as AI's productivity benefits bypass some populations entirely, albeit protecting them from some of the risks. For example, AI adoption in North America (27% of the working-age population) is triple that in Sub-Saharan Africa (9%). Only a handful of AI data centres are in developing regions, with the United States, Europe and Eastern Asia dominating capacity.
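As a quick sanity check on the market-size projection above, the implied compound annual growth rate can be computed directly. This is a minimal sketch using the report's cited figures; the variable names are illustrative, not from the source.

```python
# Implied compound annual growth rate (CAGR) of the projected global AI
# market: $280 billion (2024) to $3.5 trillion (2033), per the figures
# cited in the text. Variable names are illustrative.

start_value = 280e9       # USD, 2024 estimate
end_value = 3.5e12        # USD, 2033 projection
years = 2033 - 2024       # 9-year horizon

# CAGR = (end / start)^(1/years) - 1
cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 32% per year
```

A roughly 32% annual growth rate sustained for nine years is what the projection implicitly assumes; whether that rate holds is, of course, one of the uncertainties the section discusses.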
Within countries, the gap between AI-integrated geographies and excluded peripheries may also drive localized power shifts, create internal migration pressures and destabilize national cohesion.

This section explores three sets of risks. First, the widely cited concerns around the impact on labour markets could lead to deepening societal polarization if unemployment rises and workers struggle to adapt to new tasks and roles. In such a scenario, both higher productivity and higher unemployment could unfold simultaneously. Second, as more tasks are undertaken by AI and previously applied human skills begin to atrophy, it is unclear whether the path forward will be a golden age for creativity, leisure and learning – or, conversely, a drift into purposelessness, apathy and societal decay. In an extreme scenario, control over many aspects of society could be ceded to AI. Third, with militaries’ reliance on AI systems continuing to increase, the potential for misuse or mistakes will rise too, placing human lives directly at risk.

What distinguishes AI-driven disruption from previous technological transitions is the potential for cascading failures across interconnected domains. Labour displacement ripples widely, into households, communities and political systems. Lack of economic opportunity or unemployment (ranked #14 in the GRPS 10-year ranking) can drive extremism; institutional distrust is interlinked with misinformation and disinformation; and surveillance empowers authoritarian responses to the instability that AI creates. Once established, these loops could become self-reinforcing.

Concerns are visible in country-level business sentiment across the two-year time horizon, according to the Executive Opinion Survey 2025 (EOS). Three countries rank Adverse outcomes of AI technologies as their single most important national risk and 20 countries place it within their top five.
Regional and income-group averages show a similar pattern, with the risk ranking as high as #4 in South-Eastern Asia.

Jobless productivity

Within a decade, AI and automation could displace human labour in many roles, disrupting labour markets on a historic scale. Estimates of labour-market impacts vary widely. One estimate notes that 86% of companies worldwide expect AI to transform their business models by 2030, rising to 97% in finance and 99% in information technology, but that the labour-market impact will be positive on balance, with 170 million new roles set to be created and 92 million displaced, resulting in a net increase of 78 million jobs globally by 2030. A more negative view suggests that AI could eliminate up to 50% of entry-level, white-collar jobs within the next five years in the United States, potentially driving unemployment to 10–20%.

In a negative scenario for labour markets, market forces – unchecked by governance due to geopolitical competition – will accelerate the propensity to automate and replace human labour as much as possible, rather than to augment human tasks and skills. While new roles and tasks may emerge and offset losses, these could unfold over a much longer timeline than job displacement, as in previous major technological shifts. In such a scenario, the gains from AI will accrue mainly to highly skilled, high-productivity digital workers, while opportunities will contract faster for low-productivity workers who do not build relevant skills. Those jobs that still exist for the latter group would offer relatively depressed wages.

When displacement reaches populations such as the managerial and professional classes – with political voice, media access and higher expectations of security – the political consequences could intensify. A “white-collar rust belt” could begin to take hold in cities that today are hubs for knowledge and services, generating a powerful, angry political force.
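The headline job-flow estimate above can be checked with simple arithmetic. The figures are the report's; the variable names are illustrative.

```python
# Worked arithmetic behind the net-jobs estimate cited above:
# 170 million new roles created minus 92 million roles displaced
# by 2030. Variable names are illustrative, not from the source.

jobs_created = 170_000_000
jobs_displaced = 92_000_000

net_change = jobs_created - jobs_displaced
print(f"Net change: {net_change / 1e6:.0f} million jobs")  # 78 million
```

Note that a positive net figure can still coexist with severe disruption: the 92 million displaced workers are not necessarily the people filling the 170 million new roles, which is the adaptation gap the rest of the section explores.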
The impacts of labour-market disruption will be vast, affecting households, communities and political systems, with consequences that may prove even harder to reverse than the economic dislocations themselves. Political gridlock could worsen as societies become more polarized under economic duress. Some countries could enter a vicious cycle of economic contraction and social discontent, as AI-driven productivity gains co-exist with widespread disruption and profound inequality. A generation of university graduates may need to work gig-economy jobs as they struggle to keep pace with relentlessly improving AI capabilities. If highly educated young people remain unemployed for long periods, this could become a destabilizing force in society, with some potentially becoming more inclined towards antisocial extremism.

The GRPS finds Inequality to be the most interconnected risk for the second year in a row, reflecting its role as a transmission mechanism: labour displacement feeds inequality, which drives societal polarization. Even if there are massive productivity gains from implementing AI, as more of the middle class is hollowed out and the pathways to social mobility rapidly dissipate, incomes would decline and consumer confidence would erode, depressing spending and potentially triggering an economic downturn. Policy-makers are likely to have fewer options as the next decade progresses: high public-debt servicing costs will constrain fiscal responses, with rising middle-class unemployment negatively affecting the tax base and housing markets. Advanced economies may face the kind of permanently K-shaped economies prevalent in many highly unequal developing economies.

If AI systems continue to improve and exhibit more forms of autonomy, reasoning and adaptability that extend beyond human-programmed constraints, achieving or approaching general intelligence, the implications for labour markets and economies could become more profound.
Entire categories of cognitive and creative work could face automation. At that stage, disruption might no longer unfold linearly but exponentially, possibly compressing adaptation timelines – for aligning education, reskilling and social protections to the new technology environment – to months rather than years. The gains from implementing AI would be concentrated in the hands of capital owners (individuals or organizations). Without new frameworks for taxation, redistribution and rapid reskilling, current inequalities would ossify into structural divisions between those who control intelligent infrastructure and those who depend on it.

Purpose in drift

In geographies and sectors where waves of automation restructure labour markets, a new class could emerge: workers defined not by job loss alone but by the erosion of professional identity and social belonging. If unaddressed, this crisis of occupational identity could drive alienation, social withdrawal or anti-government and anti-technology backlashes. Many governments may aim to put in place emergency measures to maintain social stability, ranging from income safety nets to training facilities and job centres to harnessing AI for learning and job-matching.

While universal basic income (UBI) – or greater access to free services (universal basic services) – funded from the windfalls of AI is a best-case scenario for the unemployed, the question of purpose, identity and meaning remains an open one. A society where large segments, especially young people, subsist on UBI could experience a crisis of meaning. Unemployment has been found to be associated with a heightened, low-to-moderate risk of increased mental health issues (compared with being employed) – including depression, anxiety and psychological distress – even in societies with welfare states. Conversely, re-employment reduces the risk of these mental health issues.
Prolonged, mass unemployment might result in a “lost generation” that feels it has no role to play in contributing to society. Going further, AI threatens something more intangible yet fundamental: the value of being human. As cognitive tasks, creative work and even social interaction are automated, it is unclear what remains distinctively human. In education systems that are already long outdated, the integration of AI without other adaptations may erode the development of critical thinking. AI companions may reduce rather than enhance collaboration, and increase loneliness and a range of mental health issues. There is also the risk of overdependency on AI as we start leveraging it as our “second brain”. Some researchers are more provocative, anticipating that as AI gets smarter, humans get dumber.

There are second-degree physiological health impacts as well, deriving from the environmental impacts of generative AI models, which can consume up to 4,600 times more energy than traditional software. AI-related infrastructure can result in degraded air quality and pollution from manufacturing, electricity generation and e-waste disposal. In the United States alone, this could impose a public-health burden of over $20 billion annually by 2028. Health and well-being could in future also be affected by rising water insecurity in regions with significant data centre buildouts, as these require heavy water use for cooling.

Compounding these economic and psychological stresses is the prospect of information chaos as Adverse outcomes of AI technologies undermine social cohesion. Today, realistic deepfakes and AI-generated Misinformation and disinformation are already flourishing; within a decade they could become ubiquitous, making it impossible for citizens to distinguish truth from deception. The result is a fragmented public sphere in which consensus on basic facts breaks down.
In democracies, elections are contested on the authenticity of evidence itself; any scandal can be dismissed as a deepfake and any deepfake might be real. In autocratic systems, too, the consequences can be dramatic. As fear and conspiracy theories flourish, they can potentially incite violence. Communities might splinter along the lines of those who embrace technology versus those who reject it, further entrenching societal polarization.

The ultimate threat to societies is a loss of control to AI systems. Even in the absence of exponential growth in AI capabilities, incremental improvements in capability could lead to a creeping, structural shift of power from humans to AI over the next decade. As ever more capable AI agents, robotic systems and automated infrastructures assume functions once performed by humans, the balance of agency tilts. Incremental AI advances could steadily erode human influence over the economy, culture, governance and societal systems. The more that AI agents themselves are used in R&D to develop AI agents further, the greater the risk that the technology companies managing them could cease to understand how those AI systems work. Such R&D automation could accelerate the timeline for progress in AI, making it even more difficult for humans to build the technical and regulatory capabilities to keep pace.

Military misuse or mistakes

Following Russia’s invasion of Ukraine, both sides in the conflict have pushed forward the boundaries of AI use in military conflict. AI technologies have played important roles in geospatial intelligence, autonomous systems and cyber warfare, among other areas. As militaries embed AI deeper into intelligence, surveillance, logistics and command functions, the risk landscape will shift from tactical to systemic. AI will increasingly influence how militaries perceive threats, make decisions and take actions. AI system failures could propagate through entire chains of command and deterrence systems.
Without humans firmly in the loop, AI-powered platforms may misidentify threats, respond to biased data, or behave unpredictably in conditions outside their training parameters. Adversaries might use data poisoning – introducing corrupted data during model training – as a covert weapon to undermine military AI systems.

When humans are in the loop, an additional set of risks needs to be considered. Weaponized generative AI models can instantly fabricate executive orders or create synthetic, convincing battlefield footage, potentially confusing both humans and technology-based responses. Human decision-making is influenced by cognitive biases, such as confirmation bias or recency bias, when interpreting AI outputs. This can become especially challenging in conflict conditions, when it might also be tempting to over-rely on AI systems even if these are not yet fully equipped to provide nuanced decision-making support.

The speed at which AI systems operate, when applied without checks and balances, can itself be a source of risk. Military crises that once unfolded over days or hours could instead escalate in seconds. An automated early-warning system misinterpreting a missile test, for instance, could trigger defensive responses from an adversary's AI system, leading to a conflict started by technical error rather than strategic intent. Traditional deterrence, built on human deliberation and diplomacy channels, may not hold when algorithms initiate actions before leaders can act. With countries starting to implement AI tools for managing nuclear weapons stockpiles and in some areas of nuclear weapons command, control and communications, addressing such risks becomes especially critical. However, major powers are rushing to integrate AI across military domains, each fearing strategic disadvantage if rivals move first.
This dynamic incentivizes rapid deployment over rigorous safety testing, increasing the probability of failures precisely where consequences are most severe. The intense pace of innovation makes it unlikely that sufficient international norms or verification mechanisms will be established in time. Each country's pursuit of security may, collectively, produce a more dangerous world.

Beyond state actors, the democratization of AI capabilities raises the spectre of asymmetric security threats. Advanced AI tools could accelerate the development of novel weapons faster than governance frameworks can adapt. Even small groups may eventually wield destructive capacities once reserved for superpowers, leveraging AI to design bioweapons, conduct infrastructure attacks or manufacture disinformation at scale. These risks will be heightened in countries in which the dividing line is blurred between well-resourced national militaries and criminal groups with intentions to cause extreme harm. Corrupt practices and a declining rule of law could contribute to more frequent illicit sharing of sensitive information, technologies or weaponry. Militaries may then both use AI-powered autonomous technology to deflect human responsibility in warfare and, in parallel, shift that responsibility towards loosely associated non-state actors. These dangerous trajectories could lead to a world in which the very sides in warfare become difficult to identify, with plausible deniability becoming the norm.

Actions for today

To build a resilient workforce, governments and businesses should plan ahead proactively, treating skills development and job-transition planning as core elements of AI deployment. This includes funding scalable reskilling infrastructure, incentivizing job creation in emerging sectors, and targeting support for high-risk groups such as youth, people in routine service and administration roles, and older workers.
If the negative impacts of AI on labour markets accelerate, each year of policy inaction widens the adaptation gap between technology and the workforce, raising the costs of correction. To stay ahead of the curve, governments should also strengthen their monitoring of labour-market, social and geopolitical risks, similar to monitoring financial markets for systemic exposure. This includes tracking job churn, trust indicators and political volatility, including through tools such as scenario planning.

Beyond workforce considerations, the social contract between citizens and governments will itself require renewal to be fit for the era of AI. Investing in public digital infrastructure and ensuring linguistic, geographic and socioeconomic inclusivity in AI design and access is essential to avoid the emergence of a globally marginalized AI underclass. Public awareness and education will be central to rebuilding the social contract and trust in an AI-transformed economy over the next decade. It will also help to mitigate the risks most closely associated with Adverse outcomes of AI technologies, which include Misinformation and disinformation and Cyber insecurity. In parallel, societies must prepare for extended support to those most impacted by technological unemployment, exploring adaptive models of social protection and investing in the civic, psychological and cultural infrastructure needed to maintain purpose, meaning and participation in an AI-transformed economy.

The long-term risks stemming from AI depend on choices made or avoided within the short to medium term. However, fragmentation of regulatory regimes is increasing the risk of a race to the bottom. Coordination on minimum safety, transparency and ethical deployment standards, particularly for military, biometric and large-scale decision-making systems, is needed – yet requires cooperation similar to that for nuclear or bioweapons safeguards.

World Economic Forum, Geneva, Switzerland.