April 18, 2026


Year of circular investment and stress test for the industry (and investors)

If you wanted a quick read on tech anxiety in 2025, you only had to watch AI stocks move. The sector behaved less like a market and more like a stress test for investor nerves.

A tech giant with a soaring valuation invested billions in a fashionable AI startup. The startup spent the cash on hardware. The hardware maker’s stock jumped. Everyone looked richer. The loop continued.

Eventually, investors began to question whether this was momentum or a warning sign. Michael Burry and Peter Thiel added fuel to the doubts: by selling holdings or taking short positions against companies inflated by the AI boom, both signalled that valuations were drifting into fantasy territory. They were not rejecting AI's potential. They were rejecting the price tags.

Alongside the money churn, agentic AI became the industry’s new organising principle. Companies shifted from generative tools that answered prompts to systems that could reason, plan and run multi-step tasks on their own.

Enterprises placed these agents into supply chains, HR functions and software development. Major cloud providers shipped platforms designed for multi-agent coordination, promising faster decisions and fewer humans in the loop. Competition intensified as labs focused on reasoning benchmarks and rapid-fire model releases.

Everyone wanted a system that could think, not just a system that could complete a sentence.

Efficiency became another front. New optical processors performed computations at the speed of light. Quantum computing research made gains that hinted at a shake-up in cybersecurity.

The cost of training and running large AI models fell sharply as hardware, cooling and specialised chips improved. Startups advertised training runs that no longer required the budget of a small country. Investors began asking whether the era of GPU-driven scarcity was already ending.

Spending elsewhere continued at maximum volume. OpenAI and SoftBank announced a plan to build supercomputers with a price tag of half a trillion dollars.

Amazon committed another $50 billion to government-focused AI infrastructure. Companies fought for talent with the intensity of elite sports recruitment. Salaries in the millions became routine. Meta reportedly offered nine-figure packages as it tried to stay competitive.

China disrupted the self-congratulation. DeepSeek claimed to have built a reasoning model that matched leading systems in the United States at a fraction of the expected cost. Whether or not the figure was realistic, it sent a clear signal. Scale alone would not guarantee dominance.

Meanwhile, data became a war zone. Reddit challenged several AI firms over scraping. Disney accused Midjourney of systematic copyright infringement. The entertainment giant’s entry into the conflict marked a turning point. The companies that once treated the web as free fuel now faced real consequences.

The politics sharpened. The US Copyright Office issued a report that supported copyright holders. Its director was dismissed the next day. The timing raised questions about pressure from powerful AI interests. Large AI companies soon began signing licensing agreements with news organisations. It was a belated attempt at legitimacy after years of unchecked data collection.

OpenAI added its own drama. It attempted to convert fully into a for-profit corporation, then reversed course after public backlash. The company adopted a Public Benefit Corporation structure. Critics said the new framework still left too much authority in the hands of investors.

The technology itself kept exposing its faults. Zillow’s pricing model misread the housing market and produced losses on a scale that forced a strategic reset. Deloitte corrected government reports in Canada and Australia after fabricated citations were discovered. Australian regulators sued Microsoft for bundling AI tools into subscriptions without clear alternatives.

Some failures crossed into darker territory. A prompt adjustment to Grok produced antisemitic content that shocked even seasoned practitioners. A separate flaw made hundreds of thousands of private user conversations visible to public search engines. Trust, already fragile, hit a new low.

Deepfake fraud reached practical maturity. A retired doctor lost his savings after believing a fabricated video of a finance minister promoting an investment scheme. Criminals no longer needed technical sophistication. They only needed a convincing clip and a target who still believed what they saw.

A new voice entered the debate. Pope Leo XIV used his first major address to warn that AI posed challenges to human dignity and labour. His intervention reframed the conversation. AI was no longer only a commercial or political puzzle. It became a moral one.

The lesson from 2025 is clear. Building large models is straightforward. Building reliable intelligence is not. Money and compute do not buy judgment. For every capability that impresses, another misfires in a way that damages trust or creates risk.

The machines are not taking over. They are still making obvious mistakes in plain sight. And society is finally beginning to understand what it means to live with tools that are powerful, imperfect and impossible to ignore.

If this was the warm-up, the real test is coming.
