We Will Need All Those Data Centers (Part 2)

The previous article in this two-part series reviewed tech history to show that some investments make sense for society, even when companies move too fast and show “irrational exuberance.” These investors and companies may suffer and even cause a recession, but the technology remains critical and offers value to later companies as well as the public. So let’s see what’s reasonable to say about the future of data centers.

What Guesses Can We Make About the Future of Data Centers?

In the 1990s, policy-makers in government, business, and journalism came to realize what those of us working on internet technologies had expected for some time: the internet would change everything. And we were correct.

Artificial Intelligence is the same. Many knowledge workers across industries boast about improved productivity through AI, and many more use AI secretly. It’s crude, clunky, and downright scary now, but it has already demonstrated value in several areas such as energy consumption and retail. Machine learning (now something of a legacy practice) is fairly well understood and can be kept under human control. LLMs are still a rogue technology, and I don’t feel that “large world models” are enough of an innovation to fix them. So it’s up to the computer field to either bring LLMs to bay or replace them with something smarter.

We saw earlier that it took decades for the internet and its optical fiber underpinnings to become commercially successful. AI is vaguer and less tangible, so its path to success is even steeper. That is why the companies investing in the chips and data centers will experience massive losses along the way.

Institutional managers express great confidence in the future of AI, but they’re deploying it in prototypes or in strictly circumscribed applications. They’re not refashioning their whole organizations around AI the way they did around computers in the 1980s and later the internet. They’re not applying AI to any area of risk—and they’re not going to, as long as LLMs are so unpredictable. When errors in AI lead to people being jailed, it might be time to reassess its deployments.

Transparency, accountability, explainability: whatever term you use for uncovering the ways AI works, these are great things. But how much will corporate productivity improve if your staff have to recheck everything output by AI?

Even so, I’m bullish on data centers because I believe that software engineers will find robust and game-changing applications of AI, and will ensure that they are reliable. AI will not be consulted in a slapdash manner by indifferent bureaucrats (possibly coping with overwork created by staff cutbacks) but will be used with proper training to unlock demonstrated value.

(I won’t even comment on the highly publicized use of AI for mental health support and lifestyle advice. I don’t judge people in need for their choices of tools to help them.)

Will AI technologies require the mind-boggling level of data collection and the expensive, specialized processors that are currently inspiring companies to build data centers? Although DeepSeek offered an intriguing peek into cheaper ways to accomplish generative AI, its results proved disappointing.

I’m going to lay a bet: because the AI revolution (starting with machine learning) has always benefited from more and more data, AI will continue to require this level of massive processing.

If size continues to matter, AI will also continue to raise the sociopolitical questions that so many people have been debating: Where does data come from, and will the sources of data be compensated? What are the privacy implications of all this data collection? Who gets to train the models, and what biases will the trainers inject? Will AI create a “data elite,” allow powerful institutions to extend their power even further, and devalue those who work with their hands? Should governments regulate AI, and can anybody really do so? What are the responsibilities of each side in the emerging public-private partnerships, and will such partnerships bring more public trust to AI?

As with fiber, I believe companies are overbuilding in a race to outflank their competition, whether that competition is with other companies or between entire nations. And because they’re in such a rush, they are not building sustainably. I don’t know whether governments can effectively regulate AI itself, but they can stop data centers from using ruinous amounts of electricity and water.

Observers also ask whether it makes sense to fill new centers with thousands of chips when the pace of innovation in processors is so fast. The prospect of wasted chips doesn’t eliminate the future value of data centers; it just exhausts the value of the investments faster.

Impacts on Education and Employment

The long-term effects of AI on the economy and on employment are beyond the scope of this article; in fact, they are beyond my capacity, and perhaps anybody’s capacity, to predict. But I can say some things about computer administrators and programmers.

The size of data centers places them in the upper range of needs for automation. Skills that used to be the province of a small set of operations staff (skills with various names such as DevOps, continuous integration, and continuous deployment) will become more and more crucial. To move ahead in the field of operations, it will behoove professionals to make configuration tools and container technologies their everyday companions: Kubernetes, QEMU, Ansible, etc., or whatever tools emerge to replace them.
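To make that concrete, here is a minimal sketch of the kind of everyday automation this implies, written in Python against the official Kubernetes client library (installed with pip install kubernetes). The deployment name and namespace are hypothetical; the point is simply that scripted control over infrastructure is becoming table stakes for operations work.

    # A minimal sketch of routine data-center automation, assuming the
    # official Kubernetes Python client and a working kubeconfig.
    # The deployment name and namespace below are hypothetical.
    from kubernetes import client, config

    def scale_deployment(name: str, namespace: str, replicas: int) -> None:
        """Set the replica count on an existing Deployment."""
        config.load_kube_config()  # read cluster credentials from ~/.kube/config
        apps = client.AppsV1Api()
        # Patch only the scale subresource rather than the whole object.
        apps.patch_namespaced_deployment_scale(
            name, namespace, {"spec": {"replicas": replicas}}
        )

    # Scale a hypothetical inference service to three replicas.
    scale_deployment("inference-service", "default", replicas=3)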

Programmers also have a new role: to use AI responsibly, and to train their organizations to do so.

Recently I talked to a new graduate in computer science. I asked him what the field was like, wondering how it had evolved over the past few decades. He said, “Everything seems to be moving to AI.”

“That’s good,” I said. “You can learn how the technologies really work, where they’re useful, and where the risks lie; you can help any company you work for use AI properly.”

He shrugged and said, “It’s all pretty much a black box.”

The reply immediately came to my mind: “What use are you, then?”

Of course, I didn’t say that out loud, but the conversation made me worry. Bridges hold up (usually) under heavy loads, and skyscrapers stay up after earthquakes, because their engineers intimately understand their requirements. Software engineers have to think the same way, or the AI nightmares of 2001: A Space Odyssey, The Matrix, and the Terminator series might actually be our future.

<< Read the first part of this article

About Andrew Oram:

Andy is a writer and editor in the computer field. His editorial projects at O'Reilly Media ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. Andy also writes often on health IT, on policy issues related to the Internet, and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM (Brussels), DebConf, and LibrePlanet. Andy participates in the Association for Computing Machinery's policy organization, USTPC.
