The internet is changing, and it is changing fast. We recently attended Cloudflare Connect on Tour, which focused on the Agentic Internet. We spent a day immersed in some of the most pressing conversations happening across cybersecurity, AI adoption, infrastructure resilience and digital sovereignty. The sessions brought together senior security leaders, developers, architects and technology executives, all grappling with a shared set of challenges.
This post sets out the key themes and what we believe they mean for organisations navigating this rapidly shifting landscape. Whether you are a CISO, a CTO, a developer or a senior leader with responsibility for technology risk, there is something here for you.
One of the clearest messages from the day was this: the pressure to adopt AI is real, it is coming from boards and investors, and it is not going away. According to Gartner research cited during the event, 64% of CEOs are under pressure to accelerate AI adoption, and 97% of security executives feel urgency around it. Perhaps most starkly, the average cost of an AI security incident is already estimated at $4.5 million.
The risks of AI are here now. The question is no longer whether to adopt, but how to do so without creating new attack surfaces or exposing sensitive data.
Organisations today tend to fall into one of four broad postures when it comes to AI, ranging from blocking it outright to embracing it with few guardrails.
Most organisations sit somewhere in the middle. The challenge is that the threat landscape does not wait for readiness. Adversaries are already using AI to accelerate attacks, and the gap between cautious adopters and exposed organisations is widening.
Senior leaders need to be asking: do we have visibility into what AI tools our employees are actually using? Many organisations are surprised to discover the breadth of AI adoption already happening in the shadows, from productivity tools to AI coding assistants to generative chat applications carrying sensitive prompts.
Visibility is the foundation. Without it, you cannot apply the controls needed to protect sensitive data, enforce acceptable use policies, or respond to incidents. Getting those questions onto the leadership agenda is the place to start.
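As a starting point, here is a simplified sketch of what that discovery can look like in practice: tallying outbound requests in existing web gateway or proxy logs against a list of known AI tool domains. The log format and domain list below are illustrative assumptions, not a prescription for any particular product.

```typescript
// Hypothetical shadow-AI discovery pass over web proxy or gateway logs.
// The log shape and the list of AI tool domains are assumptions for illustration.

interface LogEntry {
  user: string;
  destinationHost: string;
}

const KNOWN_AI_DOMAINS = [
  "chat.openai.com",
  "claude.ai",
  "gemini.google.com",
  "copilot.microsoft.com",
];

// Count requests per AI destination so you can see which tools are in use.
function tallyAiUsage(entries: LogEntry[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const entry of entries) {
    if (KNOWN_AI_DOMAINS.some((d) => entry.destinationHost.endsWith(d))) {
      counts.set(entry.destinationHost, (counts.get(entry.destinationHost) ?? 0) + 1);
    }
  }
  return counts;
}

// Example: three log lines, two of which hit AI tools.
const sample: LogEntry[] = [
  { user: "alice", destinationHost: "chat.openai.com" },
  { user: "bob", destinationHost: "intranet.example.com" },
  { user: "carol", destinationHost: "claude.ai" },
];
console.log(tallyAiUsage(sample)); // Map { "chat.openai.com" => 1, "claude.ai" => 1 }
```

Even a crude tally like this tends to reveal far more AI use than leaders expect, and it gives security teams a factual basis for the policy conversation that follows.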
If you'd like to get a quick idea of your AI security threat landscape, take our AI risk assessment!
The cybersecurity sessions painted a sobering picture. The attack surface is not just expanding. It is being actively weaponised by increasingly capable adversaries who are using AI to operate at a speed and scale that would have been impossible just a few years ago.
Several trends stand out as particularly concerning for 2026 and beyond:
The proliferation of SaaS applications has created a vast and poorly monitored attack surface. In one case discussed at the event, a single compromised identity in a SaaS environment affected over 1,000 downstream environments, with the attacker using AI tools to move quickly and at scale. The ShinyHunters threat group, active in early 2025, targeted victims indiscriminately across industries, using enablement data to spread access across tenants.
Many organisations lack the logging infrastructure to even detect these breaches. With data spread across five, ten or more SaaS applications, the web of potential exposure is enormous, and most businesses have limited visibility into what is happening across that web.
Beyond financially motivated cybercrime, state-sponsored actors represent a different and arguably more serious category of threat. These groups are engaged in long-term, strategic positioning: compromising critical infrastructure not for immediate gain, but to establish footholds that can be activated during geopolitical events. Unlike opportunistic criminals looking for a quick win, these actors are patient and persistent. Telecommunications networks, government systems and energy infrastructure are among the targets.
Perhaps the most alarming theme from the threat landscape sessions was the industrialisation of insider threats, specifically the use of fake identities to infiltrate organisations as employees.
Sophisticated schemes are now using deepfake profiles, remote working arrangements and outsourced support networks to evade geolocation and identity verification controls. There have been documented cases of individuals being hired within an hour of applying, later found to be entirely fictitious identities. These operatives are particularly targeting organisations in the midst of AI transformation, where demand for AI talent is high and hiring may be moving faster than screening can keep pace.
If someone is consistently reluctant to appear on camera, has inconsistent working hours, or displays different voices or accents across calls, these are signals worth investigating. Traditional background checks are not sufficient: they can only flag individuals already known to a database, not new threat actors.
Insider threats are not limited to external infiltration. There is also a significant and underappreciated risk from disgruntled or otherwise motivated employees using AI tools to conduct campaigns against their own organisations, including cases of AI-assisted harassment and extortion.
Download the Cyber Incident Response Plan Template
If securing human use of AI is a challenge, governing AI agents introduces an entirely new dimension of complexity. Agents operate autonomously, take actions on behalf of users, and increasingly interact with external services through protocols such as MCP (Model Context Protocol), a standard that enables AI agents to connect with tools and data sources.
The governance challenge here is substantial. Many organisations have little or no visibility into which MCP servers are running in their environment, what those servers are doing, or whether they are connecting to vetted or unvetted sources. This creates significant compliance, data and security risks, particularly as agentic workflows begin to touch sensitive systems.
The deployment risks are real: AI coding agents pulling unvetted packages from public repositories, agents exposing internal services without appropriate authentication, and prompt injection attacks where malicious content in the environment manipulates agent behaviour.
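To make that concrete, the sketch below shows one simple control an agent runtime or egress gateway might apply before letting an agent connect to an MCP server: checking the destination against a vetted allowlist and refusing unauthenticated endpoints. The hostnames and policy shape are hypothetical and purely illustrative, not a real product API.

```typescript
// Hypothetical policy: only allow agents to connect to MCP servers that have
// been reviewed and approved. The entries and shape are illustrative only.

type McpServerPolicy = {
  allowedHosts: Set<string>; // vetted MCP server hostnames
  requireAuth: boolean;      // refuse servers reachable without authentication
};

const policy: McpServerPolicy = {
  allowedHosts: new Set(["mcp.internal.example.com", "tools.example.com"]),
  requireAuth: true,
};

// Decide whether an agent may open a connection to the given MCP server URL.
function canConnect(serverUrl: string, hasAuth: boolean): { allowed: boolean; reason: string } {
  let host: string;
  try {
    host = new URL(serverUrl).hostname;
  } catch {
    return { allowed: false, reason: "invalid server URL" };
  }
  if (!policy.allowedHosts.has(host)) {
    return { allowed: false, reason: `unvetted MCP server: ${host}` };
  }
  if (policy.requireAuth && !hasAuth) {
    return { allowed: false, reason: "authentication required" };
  }
  return { allowed: true, reason: "vetted server" };
}

// Example: an agent tries to reach an unvetted public MCP server.
console.log(canConnect("https://random-tools.example.net/mcp", false));
// -> { allowed: false, reason: "unvetted MCP server: random-tools.example.net" }
```

The point is not the specific mechanism but the principle: agent connections should pass through a policy decision point that the security team controls, rather than being negotiated silently between an agent and whatever server it discovers.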
The move from shadow AI to governed AI is not just a security imperative. It is a prerequisite for scaling agentic workflows safely. Organisations that do not build governance frameworks now will face significantly greater remediation costs as agent use proliferates.
As data regulation continues to evolve across jurisdictions and geopolitical risk rises, digital sovereignty has moved from a compliance checkbox to a strategic priority. The sessions on this theme highlighted a risk that many organisations are walking into: in attempting to solve for data residency, they are creating fragmented, complex environments that introduce new operational and security challenges.
Sovereignty, properly understood, is about three things: where data resides, who can access it, and whether services remain available if a jurisdiction intervenes. The trap many organisations fall into is solving only for the first, focusing on where data sits at rest, while neglecting what happens to data in transit, who can access it during processing, and whether services can remain available if a jurisdiction restricts access.
Sovereignty is not a choice between compliance and a free-flowing internet. Organisations can and should aim for jurisdictional control without fragmenting their architecture or creating operational complexity that becomes impossible to manage.
The final major theme of the day addressed a structural challenge that is particularly relevant for organisations whose business model depends on web content and digital publishing. AI-powered answer engines and LLMs are fundamentally changing the economics of web traffic.
Historically, the internet operated on an implicit bargain: search engines and aggregators would crawl your content, and in return, they would drive visitors to your site. That traffic was the foundation of digital business models built on advertising, subscriptions and e-commerce.
That bargain is breaking down. LLMs increasingly deliver answers directly within their interfaces, without directing users to the original source. A health publisher at the event disclosed that their site had been crawled over 643 million times in a single month by AI bots, even after blocking specific crawlers. The content was being consumed without attribution, without traffic, and without revenue.
The consequences for journalism are particularly stark. The BBC's experience with Apple Intelligence is a case in point: AI-generated summaries of BBC news notifications were distributed to iPhone users falsely claiming, among other things, that a murder suspect had shot himself, that a darts player had won a championship before the match had taken place, and that a tennis player had come out as gay. The BBC had not authorised any of this. Apple subsequently suspended the feature and committed to updates, but the damage to trust had already been done.
Publishers have more power than they realise. As LLM providers compete on the quality of grounding and real-time information, and as pretraining costs rise, access to high-quality, original content is increasingly valuable. The organisations that lock down their content and negotiate from a position of strength will be better placed than those that wait.
The industry is beginning to respond. Bilateral licensing deals between publishers and AI companies are increasing, with more deals signed in 2025 than in 2024, and that trend is expected to continue. Pay-per-crawl models are also in development: a mechanism by which AI agents would be required to make a micropayment each time they access content, with publishers setting their own pricing and access rules.
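To illustrate the mechanics, here is a simplified, hypothetical sketch of how an edge function might gate known AI crawlers behind an HTTP 402 Payment Required response. The crawler list, header names and pricing signal are our own assumptions for illustration, not Cloudflare's actual pay-per-crawl implementation.

```typescript
// Hypothetical edge handler that gates known AI crawlers behind HTTP 402.
// Crawler list, payment header and pricing are illustrative assumptions only.

const AI_CRAWLERS = ["GPTBot", "ClaudeBot", "CCBot", "PerplexityBot"];

function isAiCrawler(userAgent: string): boolean {
  return AI_CRAWLERS.some((bot) => userAgent.includes(bot));
}

export default {
  async fetch(request: Request): Promise<Response> {
    const userAgent = request.headers.get("User-Agent") ?? "";
    // Hypothetical header a paying crawler might present as proof of payment.
    const hasPaid = request.headers.get("X-Crawl-Payment-Token") !== null;

    if (isAiCrawler(userAgent) && !hasPaid) {
      // Ask the crawler to pay before serving the content.
      return new Response("Payment required to crawl this content", {
        status: 402,
        headers: { "X-Crawl-Price": "0.01 USD per request" }, // illustrative pricing signal
      });
    }

    // Ordinary visitors (and paying crawlers) receive the content as normal.
    return new Response("<html>…original content…</html>", {
      headers: { "Content-Type": "text/html" },
    });
  },
};
```

The detail that matters for publishers is that pricing and access rules sit with the content owner, enforced at the edge, rather than being left to the goodwill of the crawler.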
The underlying challenge is collective action. Individual publishers acting alone have limited leverage, but the industry has not yet found effective ways to coordinate. In the meantime, the priority for any organisation with valuable content is clear: get control of who is accessing it, on what terms, and with what compensation.
The Agentic Internet is not a future state. It is arriving now, and it is arriving unevenly. The organisations that will navigate it successfully are those that combine urgency with discipline: moving fast enough to capture the opportunities that AI presents, while building the governance frameworks, visibility infrastructure and security architecture to do so without unacceptable risk.
The themes explored here, covering AI adoption, threat evolution, agentic governance, digital sovereignty and content economics, are not independent problems. They are facets of the same underlying shift: a more automated, more interconnected, more contested internet in which the rules are still being written.
The decisions your organisation makes in the next 12 to 18 months will set the trajectory for the years ahead. The time to act is now.
Cloudflare sits at the heart of many of the challenges described in this post. As one of the world's largest network platforms, processing over 20% of all web traffic, Cloudflare provides organisations with the visibility, control and security infrastructure needed to operate safely in the age of the Agentic Internet. From securing workforce access to AI tools and governing AI agents, to enforcing digital sovereignty controls and protecting content from unauthorised crawling, Cloudflare's platform addresses these challenges at scale across a single, unified architecture.
CDS is an authorised Cloudflare service delivery partner with deep expertise in cybersecurity and cloud infrastructure. We work with organisations to translate Cloudflare's platform capabilities into practical, tailored solutions that address their specific risk profile, regulatory environment and operational context. Whether you are taking your first steps toward a zero trust architecture, building governance frameworks for AI adoption, or navigating the complexity of multi-jurisdictional data sovereignty, CDS brings the technical expertise and strategic insight to help you move forward with confidence.
If any of the themes in this post resonate with challenges you are facing, we would be glad to have that conversation. Get in touch with the CDS team to find out how we can help.