Slowdown in application workloads ahead
As AI agents automate more steps in everyday workflows, less of that work needs to run inside large application suites. That points to slower growth in data-center demand for enterprise resource planning, customer relationship management, human capital management and supply-chain management software. Reasoning-model agents and deep research tools can now browse the web, pull sources and run analyses on their own, handling tasks that previously lived in those apps' user interfaces.
Engineering software — computer-aided design and computer-aided manufacturing — may skirt these headwinds, as simulation and synthetic-data creation keep workloads anchored in specialized tools.
Coding agents supercharge testing workloads
AI coding agents (assistants inside developer tools that suggest, write and fix code) should give a big boost to application development and testing workloads. Agents such as Cursor, Anthropic's Claude Code, GitHub Copilot, OpenAI's Codex and Google's Gemini Code Assist handle tasks like debugging and extending existing code. Companies report 30-40% productivity gains on new code written with these agents, which should channel more development and testing work to AI data centers. Prompt-based code generation is quickly becoming one of the most-used generative-AI features in existing business applications.
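To make the workload shift concrete, below is a minimal, hypothetical sketch of how a team might wire a coding agent into its test pipeline. The endpoint URL, model name and payload shape are illustrative assumptions, not any specific vendor's API.

```python
# Hypothetical sketch: ask a hosted coding agent to draft unit tests for newly
# written code. The endpoint, model name and payload schema are placeholders,
# not any real vendor's API; each call like this runs as inference in an AI
# data center, which is where the added testing workload lands.
import requests

AGENT_ENDPOINT = "https://api.example-agent.com/v1/complete"  # placeholder URL
API_KEY = "YOUR_API_KEY"  # placeholder credential

NEW_CODE = '''
def apply_discount(price: float, pct: float) -> float:
    """Return price reduced by pct percent, never below zero."""
    return max(price * (1 - pct / 100), 0.0)
'''

def request_tests(source: str) -> str:
    """Send newly written code to the agent and return generated pytest cases."""
    payload = {
        "model": "coding-agent-large",  # placeholder model name
        "prompt": (
            "Write pytest unit tests covering normal, boundary and invalid "
            "inputs for the following function:\n" + source
        ),
        "max_tokens": 512,
    }
    resp = requests.post(
        AGENT_ENDPOINT,
        json=payload,
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["text"]

if __name__ == "__main__":
    print(request_tests(NEW_CODE))
```

Every such call is a round trip of tokens to an inference cluster; repeated across commits and pull requests, that is the incremental data-center demand implied by the productivity gains.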
Content delivery, cybersecurity also benefit
As autonomous AI agents plug into business workflows, more mission-critical tasks will run in AI data centers. The rise of reasoning models like OpenAI's o3 shifts the focus from simply having a model to ensuring the infrastructure behind it is fast, efficient and reliable. That's a tailwind for content delivery networks (CDNs) from companies like Cloudflare and cybersecurity providers such as Zscaler. Most companies seek to integrate internal knowledge databases and documentation with LLMs while relying on CDN and cybersecurity vendors to manage token consumption for LLM fine-tuning and inference.
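As a rough illustration of that integration pattern, the sketch below grounds an LLM prompt in internal documentation and tallies the tokens sent per request. The embedding function and the token estimate are simplified stand-ins, not any particular vendor's tooling.

```python
# Hypothetical sketch of retrieval-augmented prompting over internal docs.
# embed() is a toy stand-in for a real embedding model, and the token estimate
# is a crude word-count heuristic; the point is that every request ships
# retrieved context plus a question to an inference endpoint, which is the
# token consumption that CDN and security vendors are being asked to meter.
import math

def embed(text: str) -> list[float]:
    """Toy embedding: normalized character-frequency vector (placeholder)."""
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity of two already-normalized vectors."""
    return sum(x * y for x, y in zip(a, b))

DOCS = [
    "Expense reports over $5,000 require director approval.",
    "VPN access is provisioned through the IT service portal.",
    "Quarterly security reviews cover all external vendors.",
]
DOC_VECS = [embed(d) for d in DOCS]

def build_prompt(question: str, top_k: int = 2) -> str:
    """Retrieve the most relevant internal docs and prepend them to the question."""
    q_vec = embed(question)
    ranked = sorted(range(len(DOCS)),
                    key=lambda i: cosine(q_vec, DOC_VECS[i]),
                    reverse=True)
    context = "\n".join(DOCS[i] for i in ranked[:top_k])
    return f"Answer using only this internal context:\n{context}\n\nQuestion: {question}"

if __name__ == "__main__":
    prompt = build_prompt("Who has to approve a large expense report?")
    est_tokens = int(len(prompt.split()) * 1.3)  # rough tokens-per-word estimate
    print(prompt)
    print(f"~{est_tokens} tokens sent per inference request")
```

Multiplied across employees and workflows, those per-request token counts are the volume that infrastructure and security providers are positioning to deliver, cache and secure.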
