
The Agentic Internet: What senior leaders need to know now

Written by CDS Marketing | April 2026

Five critical themes shaping cybersecurity, AI adoption and the future of the internet, and what your organisation should be doing about them.

The internet is changing, and it is changing fast. We recently attended Cloudflare Connect on Tour which focused on the Agentic Internet. We spent a day immersed in some of the most pressing conversations happening across cybersecurity, AI adoption, infrastructure resilience and digital sovereignty. The sessions brought together senior security leaders, developers, architects and technology executives, all grappling with a shared set of challenges.

This post sets out the key themes and what we believe they mean for organisations navigating this rapidly shifting landscape. Whether you are a CISO, a CTO, a developer or a senior leader with responsibility for technology risk, there is something here for you.


1. AI adoption is no longer optional, but doing it safely is the hard part

One of the clearest messages from the day was this: the pressure to adopt AI is real, it is coming from boards and investors, and it is not going away. According to Gartner research cited during the event, 64% of CEOs are under pressure to accelerate AI adoption, and 97% of security executives feel urgency around it. Perhaps most starkly, the average cost of an AI security incident is already estimated at $4.5 million.

The risk of AI is here now. The question is no longer whether to adopt. It is how to do so without creating new attack surfaces or exposing sensitive data.

Organisations today tend to fall into one of four broad postures when it comes to AI:

  • Conservative and sceptical: cautious about adoption, focused on risk
  • Opportunistic and cautious: exploring AI selectively, building guardrails
  • Strategic and integrated: AI embedded across operations with governance frameworks
  • Experimental and agentic: building with AI agents at pace; still a small proportion of the market, but growing fast

Most organisations sit somewhere in the middle. The challenge is that the threat landscape does not wait for readiness. Adversaries are already using AI to accelerate attacks, and the gap between cautious adopters and exposed organisations is widening.

What this means for your organisation

Senior leaders need to be asking: do we have visibility into what AI tools our employees are actually using? Many organisations are surprised to discover the breadth of AI adoption already happening in the shadows, from productivity tools to AI coding assistants to generative chat applications carrying sensitive prompts.

Visibility is the foundation. Without it, you cannot apply the controls needed to protect sensitive data, enforce acceptable use policies, or respond to incidents. The critical questions to get on your agenda:

  • Which AI tools are approved, and which are being used without oversight?
  • What data is being fed into those tools, including by employees who may not realise the risk?
  • Do you have the controls to enforce different policies for different user groups, tools and risk profiles?
  • How are you governing AI agents, which operate autonomously and create an entirely new category of risk?

If you'd like to get a quick idea of your AI security threat landscape, take our AI risk assessment!


2. The threat landscape has matured, and AI is supercharging attackers

The cybersecurity sessions painted a sobering picture. The attack surface is not just expanding. It is being actively weaponised by increasingly capable adversaries who are using AI to operate at a speed and scale that would have been impossible just a few years ago.

Several trends stand out as particularly concerning for 2026 and beyond:

SaaS environments are a major vulnerability

The proliferation of SaaS applications has created a vast and poorly monitored attack surface. In one case discussed at the event, a single compromised identity in a SaaS environment led to over 1,000 downstream environments being affected, enabled by the attacker's use of AI tools to move quickly and at scale. The ShinyHunters threat group, active in early 2025, targeted victims indiscriminately by industry, using data harvested from each compromise to extend its access across tenants.

Many organisations lack the logging infrastructure to even detect these breaches. With data spread across five, ten or more SaaS applications, the web of potential exposure is enormous, and most businesses have limited visibility into what is happening across that web.

State-sponsored actors are targeting critical infrastructure

Beyond financially motivated cybercrime, state-sponsored actors represent a different and arguably more serious category of threat. These groups are engaged in long-term, strategic positioning: compromising critical infrastructure not for immediate gain, but to establish footholds that can be activated during geopolitical events. Unlike opportunistic criminals looking for a quick win, these actors are patient and persistent. Telecommunications networks, government systems and energy infrastructure are among the targets.

The insider threat is evolving and getting harder to detect

Perhaps the most alarming theme from the threat landscape sessions was the industrialisation of insider threats, specifically the use of fake identities to infiltrate organisations as employees.

Sophisticated schemes are now using deepfake profiles, remote working arrangements and outsourced support networks to evade geolocation and identity verification controls. There have been documented cases of individuals being hired within an hour of applying, later found to be entirely fictitious identities. These operatives are particularly targeting organisations in the midst of AI transformation, where demand for AI talent is high and hiring processes may be moving faster than screening processes can keep up.

If someone is consistently reluctant to appear on camera, has inconsistent working hours, or displays different voices or accents across calls, these are signals worth investigating. Traditional background checks are not sufficient: they can only flag individuals already on a database, not new threat actors.

Insider threats are not only external infiltration. There is also a significant and underappreciated risk from disgruntled or motivated employees using AI tools to conduct campaigns against their own organisations, including cases of AI-assisted harassment and extortion.

What this means for your organisation

  • Invest in SaaS visibility and log retention. You cannot investigate what you cannot see.
  • Treat identity verification as a continuous process, not a one-time check at onboarding.
  • Review your hiring processes for AI and technology roles, where fake identity schemes are most active.
  • Extend zero trust principles beyond network access to cover SaaS usage and inter-application trust.
  • Ensure your incident response capability is ready. The average dwell time for sophisticated attackers is long, and detection often comes late.

Download the Cyber Incident Response Plan Template


3. Governing AI agents is the next frontier, and most organisations are not ready

If securing human use of AI is a challenge, governing AI agents introduces an entirely new dimension of complexity. Agents operate autonomously, take actions on behalf of users, and increasingly interact with external services through protocols such as MCP (Model Context Protocol), a standard that enables AI agents to connect with tools and data sources.

The governance challenge here is substantial. Many organisations have little or no visibility into which MCP servers are running in their environment, what those servers are doing, or whether they are connecting to vetted or unvetted sources. This creates significant compliance, data and security risks, particularly as agentic workflows begin to touch sensitive systems.

The deployment risks are real: AI coding agents pulling unvetted packages from public repositories, agents exposing internal services without appropriate authentication, and prompt injection attacks where malicious content in the environment manipulates agent behaviour.

The move from shadow AI to governed AI is not just a security imperative. It is a prerequisite for scaling agentic workflows safely. Organisations that do not build governance frameworks now will face significantly greater remediation costs as agent use proliferates.

What this means for your organisation

  • Audit what MCP servers and AI agents are running in your environment today. Most organisations will be surprised.
  • Define a governance framework for agent deployment before scale makes it unmanageable.
  • Apply zero trust and least privilege principles to agentic workflows, not just human users.
  • Ensure DLP (Data Loss Prevention) controls extend to cover agent-generated prompts and responses, not just human-generated traffic.
  • Build logging and audit trails for agent activity from day one.
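To make the last point concrete, here is a minimal sketch of a structured audit trail for agent tool calls. The field names and the shape of the record are illustrative assumptions, not a standard; the point is that every autonomous action should produce a machine-readable record of who acted, with what, and what happened.

```python
import json
import time
import uuid


def audit_agent_action(sink, agent_id, tool, arguments, outcome):
    """Append one structured audit record per agent tool call.

    Field names here are illustrative assumptions; adapt them to your
    own logging pipeline and retention policy.
    """
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "agent_id": agent_id,        # which agent acted
        "tool": tool,                # e.g. an MCP tool name
        "arguments": arguments,      # what the agent was asked to do
        "outcome": outcome,          # allowed / denied / error
    }
    sink.append(json.dumps(record))
    return record
```

One line of JSON per action is deliberately boring: it can be shipped to whatever SIEM or log store you already run, and it gives investigators a timeline when an agent does something unexpected.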

4. Digital sovereignty is becoming a board-level concern

As data regulation continues to evolve across jurisdictions and geopolitical risk rises, digital sovereignty has moved from a compliance checkbox to a strategic priority. The sessions on this theme highlighted a risk that many organisations are walking into: in attempting to solve for data residency, they are creating fragmented, complex environments that introduce new operational and security challenges.

Sovereignty, properly understood, is about three things:

  • Jurisdictional control: who has legal authority over your data, and under what framework
  • Service continuity: can your services be disrupted by geopolitical events or government action
  • Operational governance: who operates your systems, from where, and under what oversight model

The trap many organisations fall into is solving for data residency, focusing on where data sits at rest, while neglecting what happens to data in transit, who can access it during processing, and whether services can remain available if a jurisdiction restricts access.

Sovereignty is not a choice between compliance and a free-flowing internet. Organisations can and should aim for jurisdictional control without fragmenting their architecture or creating operational complexity that becomes impossible to manage.

What this means for your organisation

  • Map your data flows, not just your data stores. Where is data processed, inspected and logged, not just where it rests?
  • Assess your exposure to service disruption from geopolitical events. Do you have multi-region resilience and continuity plans?
  • Understand your encryption key custody. Do you control your private keys, or does your provider?
  • Review how your vendors handle government requests. Transparency reporting and due process commitments are a meaningful differentiator.
  • Treat sovereignty as a platform-level capability, not a product feature. It needs to span your entire environment.

5. The economics of the internet are shifting, and content owners need to act

The final major theme of the day addressed a structural challenge that is particularly relevant for organisations whose business model depends on web content and digital publishing. AI-powered answer engines and LLMs are fundamentally changing the economics of web traffic.

Historically, the internet operated on an implicit bargain: search engines and aggregators would crawl your content, and in return, they would drive visitors to your site. That traffic was the foundation of digital business models built on advertising, subscriptions and e-commerce.

That bargain is breaking down. LLMs increasingly deliver answers directly within their interfaces, without directing users to the original source. A health publisher at the event disclosed that their site had been crawled over 643 million times in a single month by AI bots, even after blocking specific crawlers. The content was being consumed without attribution, without traffic, and without revenue.

The consequences for journalism are particularly stark. The BBC's experience with Apple Intelligence is a case in point: AI-generated summaries of BBC news notifications were distributed to iPhone users falsely claiming, among other things, that a murder suspect had shot himself, that a darts player had won a championship before the match had taken place, and that a tennis player had come out as gay. The BBC had not authorised any of this. Apple subsequently suspended the feature and committed to updates, but the damage to trust had already been done.

Publishers have more power than they realise. As LLM providers compete on the quality of grounding and real-time information, and as pretraining costs rise, access to high-quality, original content is increasingly valuable. The organisations that lock down their content and negotiate from a position of strength will be better placed than those that wait.

The industry is beginning to respond. Bilateral licensing deals between publishers and AI companies are increasing, with more deals signed in 2025 than in 2024, and that trend is expected to continue. Pay-per-crawl models are also in development: a mechanism by which AI agents would be required to make a micropayment each time they access content, with publishers setting their own pricing and access rules.
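To illustrate the pay-per-crawl idea, here is a minimal sketch of how an origin might gate unlicensed AI crawlers using the HTTP 402 Payment Required status code. The header name, price and agent list below are illustrative assumptions, not a published standard:

```python
# Sketch of a pay-per-crawl gate at the origin. The "crawler-price"
# header, the price and the agent list are illustrative assumptions,
# not a published standard.

CRAWL_PRICE_USD = "0.01"                 # publisher-set price per request
LICENSED_AGENTS = {"licensed-bot/1.0"}   # agents with a crawl agreement


def gate_crawler(user_agent, payment_token=None):
    """Return (status, headers) for an incoming AI crawler request."""
    if user_agent in LICENSED_AGENTS or payment_token is not None:
        # Licensed or paying crawlers get the content.
        return 200, {}
    # Otherwise, ask for a micropayment before serving anything.
    return 402, {"crawler-price": CRAWL_PRICE_USD}
```

The important property is that the publisher, not the crawler, sets the terms: price and access rules live at the edge, and unlicensed requests get a priced refusal rather than free content.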

The underlying challenge is collective action. Individual publishers acting alone have limited leverage, but the industry has not yet found effective ways to coordinate. In the meantime, the priority for any organisation with valuable content is clear: get control of who is accessing it, on what terms, and with what compensation.

What this means for your organisation

  • Audit your robots.txt and crawler access policies. Many organisations have not reviewed these since LLMs became dominant.
  • Understand the scale of AI bot traffic hitting your properties. The numbers are likely larger than expected.
  • Explore licensing frameworks with AI providers, particularly if you produce high-value, specialist or frequently updated content.
  • Consider the reputational risk of AI systems misrepresenting your content. Proactive engagement with major AI providers on content policies is increasingly important.
  • Monitor developments in pay-per-crawl and attribution standards. These will reshape digital content economics over the next 24 months.
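As a starting point for the first action above, a robots.txt along these lines disallows several widely used AI crawler user agents while leaving conventional search untouched. Note that robots.txt is advisory only: well-behaved bots honour it, and non-compliant bots must be blocked at the network edge.

```text
# Block common AI training and answer-engine crawlers (advisory only)
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /

# Conventional crawlers remain unrestricted
User-agent: *
Disallow:
```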

Final thought

The Agentic Internet is not a future state. It is arriving now, and it is arriving unevenly. The organisations that will navigate it successfully are those that combine urgency with discipline: moving fast enough to capture the opportunities that AI presents, while building the governance frameworks, visibility infrastructure and security architecture to do so without unacceptable risk.

The themes explored here, covering AI adoption, threat evolution, agentic governance, digital sovereignty and content economics, are not independent problems. They are facets of the same underlying shift: a more automated, more interconnected, more contested internet in which the rules are still being written.

The decisions your organisation makes in the next 12 to 18 months will set the trajectory for the years ahead. The time to act is now.

How Cloudflare and CDS can help

Cloudflare sits at the heart of many of the challenges described in this post. As one of the world's largest network platforms, processing over 20% of all web traffic, Cloudflare provides organisations with the visibility, control and security infrastructure needed to operate safely in the age of the Agentic Internet. From securing workforce access to AI tools and governing AI agents, to enforcing digital sovereignty controls and protecting content from unauthorised crawling, Cloudflare's platform addresses these challenges at scale across a single, unified architecture.

CDS is an authorised Cloudflare service delivery partner with deep expertise in cybersecurity and cloud infrastructure. We work with organisations to translate Cloudflare's platform capabilities into practical, tailored solutions that address their specific risk profile, regulatory environment and operational context. Whether you are taking your first steps toward a zero trust architecture, building governance frameworks for AI adoption, or navigating the complexity of multi-jurisdictional data sovereignty, CDS brings the technical expertise and strategic insight to help you move forward with confidence.

If any of the themes in this post resonate with challenges you are facing, we would be glad to have that conversation. Get in touch with the CDS team to find out how we can help.

Want more content like this? Sign up to our monthly newsletter!