CCS Guest Blog
We are already well into the year, and this matters. Predictions help leaders make choices under uncertainty. But a quarter of the way through the year, we also have early evidence of what is sticking, what is stalling, and what is being reprioritised. We have seen geopolitical shocks, ongoing conflicts, regulatory pressure, and a financial market mood that has reminded everyone that technology spending is not immune to uncertainty. A notable sell-off in several technology stocks has sharpened board-level questions about payback, timing, and operational risk.
So while this is not a January crystal ball piece, it is a mid-year orientation for enterprise Information and Communication Technology (ICT): an attempt to outline where things are moving, why, and what senior leaders can operationalise in the next six months.
One thread runs through everything: the control plane. Not the marketing version, but the real one.
The control plane is where identity and permissions live, where policies are enforced, where audit evidence is collected, and where automation is allowed (or blocked). In 2026, it is increasingly the difference between organisations that can safely scale cloud, security and AI, and those that can only pilot them.
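To make that description concrete, here is a minimal sketch of what an action-level control plane does: check identity and permissions, enforce policy, record audit evidence, and allow or block automation. All identities, permissions, and policy names below are hypothetical, not drawn from any specific product.

```python
from datetime import datetime, timezone

# Illustrative only: a toy control plane that gates an action on
# identity, permissions, and policy, and records audit evidence.
PERMISSIONS = {
    "svc-backup": {"read:records"},
    "svc-agent": {"read:records", "write:records"},
}
AUDIT_LOG = []

def authorise(identity: str, action: str, policy_blocked: frozenset = frozenset()) -> bool:
    """Allow the action only if the identity holds the permission and policy permits it."""
    allowed = action in PERMISSIONS.get(identity, set()) and action not in policy_blocked
    AUDIT_LOG.append({  # audit evidence: who did what, when, with what result
        "who": identity,
        "what": action,
        "when": datetime.now(timezone.utc).isoformat(),
        "result": "allowed" if allowed else "blocked",
    })
    return allowed
```

The point of the sketch is that allow/block and evidence are produced in the same place: every decision, permitted or not, leaves an audit record.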
Below are six themes that explain why the control plane is the new battlefield. They map cleanly to what enterprise leaders are wrestling with: governed operations, sovereignty, platform discipline, and AI systems that increasingly act rather than just suggest.
If this sounds less like a shiny technology roadmap and more like an operating discipline, that’s because it is. 2026 is not short of ambition; it is, however, short of patience for anything that cannot be run, governed, and proved.
Proof Beats Promise
For most enterprises, scrutiny has hardened. Budgets may still grow, but leaders are being pushed to show operational proof, not just intent.
That proof is practical. It shows up in risk reduction (faster detection, faster containment, fewer repeat incidents), friction removal (fewer manual handoffs, fewer workarounds, smoother access), and audit-ready evidence that clearly shows who did what, when, under what policy, and with what result.
This matters because many 2026 bets (cloud foundations, security overhauls, AI platforms, agentic automation) are operating commitments, not one-off projects. In other words, they do not just need to work once in a demo. They need to behave on a bad day.
A six-month move: build a short “evidence pack” for each major initiative. Define the outcome metric, the control-plane signals that prove it, and the rollback or containment path if it fails. Big promises are easy when conditions are calm. The test is whether the system behaves under messy conditions.
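One way to keep an evidence pack honest is to treat it as a structure with required fields rather than a slide. The sketch below captures the three elements named above; the initiative, metrics, and signals are hypothetical examples, not prescriptions.

```python
from dataclasses import dataclass

# Illustrative only: the "evidence pack" fields described above,
# captured as a structure a programme office could actually maintain.
@dataclass
class EvidencePack:
    initiative: str
    outcome_metric: str        # the result the initiative must move
    proof_signals: list        # control-plane signals that prove it
    rollback_path: str         # containment plan if the initiative fails

    def is_complete(self) -> bool:
        """A pack is only credible if every field is filled in."""
        return bool(self.outcome_metric and self.proof_signals and self.rollback_path)

pack = EvidencePack(
    initiative="agentic-automation-pilot",
    outcome_metric="mean time to containment under 30 minutes",
    proof_signals=["audit trail per action", "alert-to-decision latency"],
    rollback_path="pause switch; revert to manual workflow",
)
```

The `is_complete` check is the useful discipline: an initiative with no rollback path or no proof signal fails the check before it fails in production.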
This shift also raises the bar for anyone selling into enterprises. Buyers are increasingly asking for proof artefacts, not just reference stories: operational metrics, control documentation, and claims that can be tested in live environments. “Trust us” is not a strategy; it is a gap to be filled with evidence.
Sovereignty Becomes a Test Suite
Sovereignty has moved from political headline to procurement reality. But the market is also learning that sovereignty is not a label; it is a set of controls.
The conversation is shifting from "Where is my data?" to "Who controls the system when something breaks?"
That second question forces clarity on control-plane essentials, especially regarding who controls cryptographic keys and emergency access; where identity, logging, and audit trails are operated and what is retained; who can administer the environment, from where, and under what rules; and what exit really costs in time, disruption, and dependencies.
In 2026, serious buyers will increasingly treat sovereignty as a procurement test suite, using a small set of pass-or-fail tests on sensitive workloads. The pragmatic advice is to start small and real, then pick the minimum tests you will enforce and apply them consistently. Sovereignty done properly can add cost and complexity, which is exactly why evidence over reassurance matters.
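A "procurement test suite" can be as literal as a handful of pass-or-fail checks run against the facts of a workload. The sketch below is a minimal illustration; the workload attributes, thresholds, and approved regions are all hypothetical assumptions, not a recommended baseline.

```python
# Illustrative only: sovereignty treated as a small pass-or-fail test suite.
# Each check asks a control-plane question; the workload facts are hypothetical.
workload = {
    "keys_held_by": "customer",       # who controls cryptographic keys
    "admin_regions": {"UK"},          # where administration can happen from
    "audit_retention_days": 365,      # what audit evidence is retained
    "exit_drill_passed": False,       # has a realistic exit actually been tested
}

checks = {
    "customer-held keys": lambda w: w["keys_held_by"] == "customer",
    "admin restricted to approved regions": lambda w: w["admin_regions"] <= {"UK", "EU"},
    "audit retained at least one year": lambda w: w["audit_retention_days"] >= 365,
    "exit drill passed": lambda w: w["exit_drill_passed"],
}

results = {name: check(workload) for name, check in checks.items()}
failures = [name for name, passed in results.items() if not passed]
```

Run against this (fictional) workload, the suite passes everything except the exit drill, which is exactly the kind of finding that a label or a diagram would have hidden.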
This is also where the market will separate the serious from the simply well-worded. Providers and partners that can present a clear, auditable sovereignty evidence pack (controls, roles, logs, incident behaviour, and exit feasibility) will be easier to trust than those relying on geography, branding, or a reassuring diagram. The diagram can stay; it just cannot be the only thing doing the work.
Portability is Engineered, Not Granted
Competition pressure and regulatory oversight are shaping the market. That is real. But it will not solve lock-in for you.
Lock-in is not only contractual but also operational, and it often sits in data egress economics, identity and policy coupling, and hard-to-unpick integration sprawl. In other words, it is in the control plane and the operating model. If lock-in were just a contract problem, it would be far less common.
The blunt prediction doubles as a practical instruction: portability must be engineered. This does not mean making everything portable; it means being selective. Identify the two or three dependencies that would be most critical (identity, security telemetry, core data, critical workloads), build realistic exit plans (including dual-running and timeline assumptions), and run at least one exit drill on a meaningful workload to expose hidden coupling.
The point is not to move for the sake of it, but to prove you could move if conditions forced your hand.
That reality is starting to reshape how platforms are judged. Products and services that make dependencies visible, reduce migration theatre, and support realistic dual-running will look more attractive than those that treat exit as someone else’s problem. “You can leave whenever you like” should no longer be a slogan; it is becoming a design and contractual expectation.
Agents Need Controls, Not Reassurance
Generative AI is now common enough that it is no longer a differentiator, but running it safely at scale is.
Agentic AI raises the bar again because agents can trigger actions such as changing records, initiating workflows, provisioning resources, or affecting customer outcomes. This is where human-in-the-loop language can become a false comfort. Humans can be overloaded, pressured to approve quickly, or asked to validate decisions they cannot realistically test. A loop that exists in theory but fails in practice is still a failure; it just has better branding.
So the operational question becomes: what controls exist at the action level?
In 2026, enterprises will increasingly treat agent readiness as a control-plane discipline. They will expect agents to be bound to named identities with least-privilege permissions; they will tier actions by risk (some actions can be automatic, others require confirmation, and higher-risk actions should require stronger approval); they will demand traceability for every action; and they will insist on safe defaults, such as pause or stop switches and rollback by design.
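The tiering idea in particular is easy to sketch. The example below shows one plausible shape for action-level control, with a safe default of blocking anything unrecognised; the action names, tiers, and approval thresholds are hypothetical.

```python
from enum import Enum

class Tier(Enum):
    AUTOMATIC = "automatic"          # low risk: agent may act alone
    CONFIRM = "confirm"              # medium risk: needs one confirmation
    STRONG_APPROVAL = "approval"     # high risk: needs stronger approval

# Illustrative only: agent actions tiered by risk, as described above.
ACTION_TIERS = {
    "read_record": Tier.AUTOMATIC,
    "update_record": Tier.CONFIRM,
    "approve_payment": Tier.STRONG_APPROVAL,
}

def dispatch(action: str, approvals: int = 0) -> str:
    """Safe default: actions with no assigned tier are blocked, not allowed."""
    tier = ACTION_TIERS.get(action)
    if tier is None:
        return "blocked"             # pause/stop by design, never act on the unknown
    if tier is Tier.AUTOMATIC:
        return "executed"
    if tier is Tier.CONFIRM:
        return "executed" if approvals >= 1 else "pending confirmation"
    return "executed" if approvals >= 2 else "pending approval"
```

The design choice worth noting is the default: an unmapped action returns "blocked" rather than being waved through, which is the control-plane equivalent of a stop switch.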
A six-month move: treat agents like a production change. Start with bounded workflows, require runbooks and exception handling, and test bad-day behaviour (bad data, partial outage, conflicting permissions) before scaling.
One extra point deserves attention: ethics becomes operational when systems can act. If an agent can change a customer record, approve a payment, open access, or trigger enforcement, then fairness, accountability and explainability stop being abstract principles. They become guardrails you can test: what the agent is allowed to do, what it is not allowed to do, and what evidence it must produce each time it acts.
This is also where agent platform builders will be judged most harshly and, frankly, most fairly. Enterprises will gravitate to control-layer features (identity binding, policy enforcement, audit trails, rollback) and will lose patience with autonomy claims that cannot be bounded, monitored, or explained. If the pitch is “it will be fine because a human is involved”, the next question will be: which human, how often, and with what proof?
Security Becomes “Less Noise, Fewer Workarounds”
Security remains a top priority, but many organisations are stuck in a loop: buying more tools and generating more alerts while still struggling to respond fast and consistently.
In 2026, security strategy will shift from coverage to operability: correlation across environments, fewer false positives, faster decisions, and clearer accountability. This is a control-plane problem as much as a tooling problem. A simple rule of thumb applies: if your security team spends most of its time arguing about alerts, your attackers are being given far too much peace.
Two behavioural truths are increasingly hard to ignore. The first is that security friction creates shadow behaviour: if secure behaviour is difficult, people route around it. That is an operational risk, not a user problem. The second is that shadow AI presents the same dynamic with higher compliance stakes: if approved AI tools are hard to access or poorly integrated, workarounds become predictable.
This is why making the safe path the easy path will become a serious security design principle in 2026. It will mean smoother access journeys, fewer unnecessary prompts, and governed AI embedded in everyday tools, paired with monitoring and safe defaults. Consolidation will also rise, but it should be justified by response effectiveness and manageability, not by vendor-count reduction alone.
You can already see where this puts pressure on security suppliers: feature lists matter less than time-to-respond and clarity-to-operate. The winning stacks will be those that reduce noise and produce evidence, rather than simply generating more alerts and calling it “visibility.”
Constraints are Real: Cost, Power, Supply Chains and Geopolitics
The market is not operating in a calm environment. AI infrastructure is expensive. Capacity constraints still matter. Power planning is increasingly strategic. Add geopolitics, and technology is being pulled further into the critical infrastructure frame. That shows up in export controls, localisation pressure, public-sector scrutiny, and shifting attitudes to cross-border administration of systems.
The practical point is not to become a geopolitical analyst, but to accept that volatility can become an operational constraint. A strategy that assumes stable conditions has a habit of becoming yesterday’s strategy surprisingly quickly.
In 2026, more organisations will map where critical services are controlled from (not just where data sits), test supplier restriction scenarios, and treat concentration risk in AI stacks as something to manage rather than ignore. The goal is selective resilience with contingency options for identity, logging, security monitoring, and core platforms that focus on what would hurt most, not everything at once.
Financial volatility reinforces the same discipline. When sentiment shifts, leaders get asked harder questions. Proof beats promise again.
How to Operationalise this in the Next Six Months
If the control plane is the battlefield, the next six months are about securing the high ground with practical moves, not grand redesigns. Therefore, the following should be considered:
- Build evidence packs for your top initiatives: outcome metrics, proof signals, rollback paths.
- Set a sovereignty test suite for sensitive workloads: keys, admin access, logs, exit realism.
- Engineer exit realism for your riskiest dependencies: pick a few, plan dual-running, run a drill.
- Treat agents like production change: bounded workflows, traceability, safe defaults, failure testing.
- Reduce security friction deliberately: redesign one high-burn workflow and measure the outcome.
- Plan for constraints: capacity, supplier concentration, and restriction scenarios for core services.
None of this is glamorous or especially new. That is the point.
The direction of travel for 2026 is clear: credibility comes from operational discipline, from systems that can be governed, tested, and proven, especially when conditions are unstable. The future is not only about what technology can do; it is about what you can control. Control is the basis for adaptation and progression, together with a certain level of predictability.

