Why should we prioritize the sustainable development of artificial intelligence?


Sustainable AI must become the ultimate goal for everyone in the industry.

Image source: Getty Images/iStockphoto




Raquel Urtasun

Waabi Founder and CEO



  • Artificial intelligence will become the most important technology of this century and beyond.

  • In recent years, the scale of AI models has expanded rapidly. While each new generation of models has made advancements in certain areas, development costs have also soared, making this development model increasingly unsustainable.

  • The brightest minds from industry, academia, the venture capital world, and government must refocus their attention and resources on developing more sustainable AI that benefits all of humanity.


Artificial intelligence is set to become the most transformative technology of this century and beyond. AI will reshape every aspect of our lives and empower us to tackle some of the world’s most pressing challenges—such as climate change, road safety, and even cancer. Over the past few years, driven by the surge in generative AI, we’ve already begun witnessing this potential gradually turning into reality.

However, I’m concerned about the direction the industry is heading—and its impact on future generations and the planet. While modern AI boasts remarkable capabilities, the primary approach to developing this technology still relies heavily on "brute force." Achieving greater performance requires larger models, which in turn demand more data, computing resources, and energy. Yet this path has already given rise to complex challenges and growing inequalities.

If we continue down this path, the unsustainable cycle will persist until resources are depleted. We must shift gears and prioritize the development of sustainable AI—unlocking its full potential while ensuring a fair future for this transformative technology.

The High Cost of the "More Is Better" Model

Over the past few years, the scale of AI models has expanded rapidly. While each new generation of models has made advancements in certain areas, development costs have also soared. Reportedly, OpenAI's 2024 spending on training ChatGPT and its updated models will reach as much as $3 billion, with the servers required to run inference costing an additional $4 billion. To stay competitive in the AI race, companies like Microsoft, Meta, Amazon, and Google have significantly ramped up their capital expenditures, which are expected to exceed $200 billion in 2024.

Computing costs continue to rise, and energy consumption is climbing with them. It's estimated that a single training run of a large language model (LLM) consumes as much electricity as 130 U.S. households use in an entire year. As models grow even larger, this figure is expected to climb further. According to the International Energy Agency (IEA), data center electricity demand could double by 2026, soaring to between 650 and 1,050 terawatt-hours. The IEA warns that, at the upper end, that increase is roughly equivalent to adding the electricity consumption of an entire country the size of Germany.
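To make these magnitudes concrete, here is a back-of-the-envelope calculation in Python. It is a minimal sketch: the average household consumption, the 2022 data-center baseline, and Germany's annual consumption are rough assumptions added for illustration; only the 130-household comparison and the 650 to 1,050 TWh range come from the figures above.

```python
# Back-of-the-envelope check of the energy comparisons above.
# Assumed constants (illustrative, not from the article):
US_HOUSEHOLD_KWH_PER_YEAR = 10_600   # rough U.S. average annual use
BASELINE_2022_TWH = 460              # rough 2022 data-center demand
GERMANY_TWH = 500                    # rough annual consumption of Germany

# From the article: one training run ~ 130 household-years of electricity.
HOUSEHOLDS_PER_TRAINING_RUN = 130
training_run_kwh = US_HOUSEHOLD_KWH_PER_YEAR * HOUSEHOLDS_PER_TRAINING_RUN
print(f"One training run: ~{training_run_kwh / 1e6:.2f} GWh")  # ~1.38 GWh

# From the article: projected 2026 data-center demand of 650-1,050 TWh.
low_twh, high_twh = 650, 1_050
print(f"Upper-end growth by 2026: ~{high_twh - BASELINE_2022_TWH} TWh, "
      f"on the order of Germany's ~{GERMANY_TWH} TWh per year")
```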

This unsustainable path is also generating broader societal impacts. AI’s “brute-force” development model has created barriers to entry, limiting opportunities for innovation in artificial intelligence for groups that lack substantial resources. We’re already witnessing how computing power is emerging as a new form of geopolitical capital, with wealthier nations vying for control over advanced chip manufacturing. This trend could lead to a world where only a select few hold the technological expertise—and reap the benefits—while others are left behind. Such a scenario risks deepening existing inequalities and stifling further innovation altogether.

There is a better model.

Call for a Sustainable AI Revolution

Today's neural network architectures have demonstrated that they scale successfully, leading the industry to ask: why not simply stick with the same approach? However, as models approach saturation, the cost of each further gain has skyrocketed. As a result, the AI scaling laws—which once promised that performance would keep improving as models grew—are beginning to break down.
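To see why returns diminish, consider the power-law shape that neural scaling-law studies typically report, loss(N) ∝ (N_c / N)^α. The sketch below is illustrative only; the constants are in the spirit of published fits but should be treated as assumptions, not measurements.

```python
# Illustrative only: a generic power-law scaling curve. Each constant
# improvement in loss requires a multiplicative increase in model
# size N -- hence exploding cost as models approach saturation.

N_C = 8.8e13      # assumed scale constant (illustrative)
ALPHA = 0.076     # assumed exponent (illustrative)

def loss(n_params: float) -> float:
    """Predicted loss for a model with n_params parameters."""
    return (N_C / n_params) ** ALPHA

for n in (1e9, 1e10, 1e11, 1e12):
    print(f"{n:.0e} params -> predicted loss {loss(n):.3f}")

# Going from 1e9 to 1e12 parameters (1000x the size, and a far larger
# compute and energy footprint) reduces predicted loss by only ~40%.
```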

We must act immediately, rather than waiting until we hit a bottleneck. The brightest minds from industry, academia, the venture capital community, and government must refocus their attention and resources on sustainable AI. This means not only developing new AI models with advanced reasoning capabilities that can generalize from fewer examples, but also fundamentally reshaping learning paradigms—and rethinking the very role data plays in the training process.

Next-Generation AI Models

Today's mainstream AI models are inefficient to train because they rely on pattern recognition rather than true understanding—their capabilities resemble the human brain's "System 1" thinking. Psychologist Daniel Kahneman introduced this concept to describe fast, intuitive, reactive, and unconscious cognitive processing that involves no deep reasoning. Humans depend on System 1 thinking to make rapid decisions; when a pedestrian suddenly steps into the path of a car, for instance, we instinctively slam on the brakes. Most AI models use this mode of reasoning for all of their decision-making.

When tackling more complex problems, humans rely on "System 2" thinking, weighing options carefully and making deliberate, conscious decisions. In the example above, given more time, we would reflect on alternative actions and their potential consequences—ultimately opting for the safest course. Rather than slamming on the brakes instantly, we might choose to steer around the pedestrian instead, avoiding the collision without letting the car behind rear-end us. Over the past two decades, I've developed multiple generations of AI models capable of "thinking" in a similar way. Like humans, these models can generalize effectively from fewer examples and adapt more efficiently to new situations. This methodology yields models that are not only safer and higher-performing but also more sustainable.
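As a toy illustration of the contrast (entirely hypothetical; this is not the author's actual architecture), the sketch below pits a reflexive "System 1" policy against a "System 2" planner that scores the simulated consequences of each candidate action before choosing.

```python
# Toy contrast between reactive ("System 1") and deliberative
# ("System 2") decision-making. Hypothetical illustration only.

def system1_policy(obstacle_distance_m: float) -> str:
    """Fast, reflexive mapping from observation to action."""
    return "brake_hard" if obstacle_distance_m < 20 else "continue"

def system2_plan(obstacle_distance_m: float, rear_gap_m: float) -> str:
    """Deliberate: score each candidate action's consequences, pick best."""
    def outcome_cost(action: str) -> float:
        if action == "brake_hard":
            # Avoids the pedestrian, but risks a rear-end collision.
            return 0.0 if rear_gap_m > 15 else 5.0
        if action == "steer_around":
            # Avoids both hazards when there is room to maneuver.
            return 1.0 if obstacle_distance_m > 10 else 8.0
        return 10.0 if obstacle_distance_m < 20 else 0.0  # "continue"
    return min(("brake_hard", "steer_around", "continue"), key=outcome_cost)

print(system1_policy(15))               # brake_hard (reflex)
print(system2_plan(15, rear_gap_m=8))   # steer_around (weighs consequences)
```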

A New Learning Paradigm

Traditional AI training methods rely on massive static datasets and "brute-force" learning, where AI acquires knowledge by repeatedly cycling through examples in the dataset. This is an overly simplistic and inefficient approach to learning.

We need to shift the learning paradigm to reflect the subtle, dynamic nature of human learning. The focus shouldn't be on quantity, but on high-quality, high-information data that evolves as the AI's skills and learning capabilities mature. In first grade, for instance, we start with simple addition; by sixth grade, teachers have advanced the curriculum to algebra and geometry. In practice, this means ensuring that every training sample delivers as much useful information as possible at each stage of the learning process.
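One established technique in this spirit is curriculum learning: admitting harder, more informative examples as the model's measured competence grows. Below is a minimal, hypothetical sketch; the difficulty scores and the competence schedule are invented for illustration.

```python
import random

# Minimal curriculum sampler (hypothetical illustration). Each example
# carries a difficulty score; as the model's measured competence grows,
# harder and more informative examples are admitted into training.

def curriculum_batch(dataset, competence: float, batch_size: int = 4):
    """Sample only examples the model is ready for.

    dataset: list of (example, difficulty) pairs, difficulty in [0, 1].
    competence: current skill estimate in [0, 1], e.g. from validation.
    """
    eligible = [ex for ex, diff in dataset if diff <= competence]
    return random.sample(eligible, min(batch_size, len(eligible)))

dataset = [("2+2", 0.1), ("17*24", 0.4), ("solve x^2-5x+6=0", 0.7),
           ("prove the triangle inequality", 0.95)]

for competence in (0.2, 0.5, 0.8):   # grows as training progresses
    print(competence, curriculum_batch(dataset, competence))
```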

Moreover, just as teachers provide rich feedback during the learning process, we must give AI richly informative supervision—feedback that goes beyond mere corrections to analyze the AI's mistakes, identify where its reasoning can be refined, and offer guidance on improving the quality of its outputs.
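One way to picture such supervision (a hypothetical sketch, with invented field names) is as a structured record that carries the analysis alongside the bare correction:

```python
from dataclasses import dataclass

# Hypothetical illustration of "rich" supervision vs. a bare label.
# Field names are invented for this sketch.

@dataclass
class RichFeedback:
    corrected_output: str   # the bare correction ("what")
    error_analysis: str     # why the model's output was wrong
    reasoning_fix: str      # where the reasoning should change
    quality_notes: str      # how to improve the output overall

fb = RichFeedback(
    corrected_output="steer_around",
    error_analysis="Braking alone ignored the tailgating vehicle.",
    reasoning_fix="Consider consequences for all nearby agents.",
    quality_notes="Prefer actions that resolve both hazards.",
)
print(fb.error_analysis)
```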

Finally, in the new learning paradigm, we must enable AI to take an active role in the tasks it attempts to tackle—just as humans do. When learning to drive, for instance, we don't simply watch videos of others driving and then try to mimic their actions on the road; we get behind the wheel and practice firsthand. This "closed-loop" learning process allows AI to receive feedback immediately after producing an output, fostering a deeper understanding of its internal reasoning mechanisms—and ultimately leading to more efficient and effective learning. Reinforcement learning algorithms aim to achieve precisely this, but so far they've faced a major hurdle: their insatiable demand for data makes it hard to deliver sustainable, real-world solutions.
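In code, closed-loop learning is the classic act, observe, update interaction loop. The sketch below is a minimal, generic example (an epsilon-greedy two-action bandit, not any particular library's API or the author's system): the agent's own action is evaluated immediately, and its estimate is updated from that feedback.

```python
import random

# Minimal closed-loop learner (illustrative sketch). The agent acts,
# receives feedback on its own action immediately, and updates --
# in contrast to passively replaying a static dataset.

TRUE_REWARD = {"brake": 0.3, "steer": 0.7}   # hidden from the agent

def feedback(action: str) -> float:
    """Environment: immediate, noisy feedback on the chosen action."""
    return TRUE_REWARD[action] + random.gauss(0, 0.1)

estimates = {"brake": 0.0, "steer": 0.0}
LEARNING_RATE, EPSILON = 0.1, 0.1

for step in range(500):
    # Act: mostly exploit the current estimates, sometimes explore.
    if random.random() < EPSILON:
        action = random.choice(list(estimates))
    else:
        action = max(estimates, key=estimates.get)
    # Closed loop: the agent's own output is evaluated immediately...
    reward = feedback(action)
    # ...and its estimate is updated from that feedback.
    estimates[action] += LEARNING_RATE * (reward - estimates[action])

print(estimates)   # "steer" should end up with the higher estimate
```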

By focusing on two innovative areas—developing AI models with genuine understanding capabilities and reimagining learning paradigms—we can usher in a new era of AI.

Charting a New Course for AI

We are at a critical juncture, and how we chart the path forward matters enormously. The decisions we make today about resource allocation and strategic direction will shape our future. To foster genuine innovation and steer clear of unintended consequences, we must prioritize responsible development above all else.

For the sake of our world and future generations, it’s time to prioritize efficiency and creativity—rather than blindly relying on brute-force scaling of today’s models. This is the only way to unlock AI’s true potential and ensure that AI technology benefits all of humanity.

The above content represents solely the author's personal views. This article is translated from the World Economic Forum's Agenda blog; the Chinese version is for reference only.

Translated by: Di Chenjing | Edited by: Wang Can

The World Economic Forum is an independent and neutral platform dedicated to bringing together diverse perspectives to discuss critical global, regional, and industry-specific issues.
