By Doug Green
“AI is ready to enforce decisions at scale—but it’s not ready to make them.”
In a recent Telecom Reseller podcast, I spoke with Chris Bonavita, Vice President of Strategy and Technology Adoption at GTT Communications, about one of the most important—and often misunderstood—shifts happening in AI-driven cybersecurity.
As enterprises move aggressively toward autonomous AI inside the Security Operations Center (SOC), Bonavita argues the industry is getting ahead of itself. The problem isn’t whether AI is powerful—it clearly is. The problem is where that power is being applied.
Today’s AI is exceptionally good at ingesting massive volumes of data, identifying patterns, detecting anomalies, and executing defined tasks at machine speed. In the SOC, that translates into real, measurable value. AI is already improving threat detection, accelerating response times, and reducing the burden of repetitive operational work.
But there is a line—and according to Bonavita, the industry is starting to cross it too quickly.
AI, he explains, does not understand intent. It does not understand business context. And it cannot reliably distinguish between what is technically possible and what is operationally appropriate. That distinction matters in cybersecurity, where decisions carry financial, operational, and reputational consequences.
This is where the concept of “AI should enforce, not decide” becomes critical.
In this model, humans define policy, intent, and acceptable risk. AI then executes—consistently, continuously, and at scale. It becomes the enforcement engine, not the decision-maker.
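The enforce-not-decide split can be sketched in a few lines of Python. This is a minimal illustration, not anything from GTT's platform: the policy fields, the `lock_account` action, and the thresholds are all hypothetical, chosen only to show where the human/machine boundary sits.

```python
from dataclasses import dataclass
from typing import Optional

# Humans define policy: explicit intent and acceptable risk,
# written and reviewed by people, not generated by a model.
@dataclass(frozen=True)
class Policy:
    max_failed_logins: int   # threshold set by the security team
    action: str              # response chosen by humans (hypothetical example)

HUMAN_POLICY = Policy(max_failed_logins=5, action="lock_account")

def enforce(policy: Policy, failed_logins: int) -> Optional[str]:
    """The automation layer: applies the human-defined rule at machine speed.
    It never invents a new action; it only executes what the policy allows."""
    if failed_logins > policy.max_failed_logins:
        return policy.action
    return None

print(enforce(HUMAN_POLICY, failed_logins=7))  # -> lock_account
print(enforce(HUMAN_POLICY, failed_logins=2))  # -> None
```

The point of the sketch is the asymmetry: the `Policy` object is authored and changed only by humans, while `enforce` can run continuously across millions of events without ever expanding its own authority.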
When that boundary is ignored, new risks begin to emerge.
Bonavita points to issues like policy drift, where AI systems begin to deviate from original intent over time, and agent conflict, where multiple automated systems act on overlapping or contradictory instructions. In a dynamic environment without clear human control, these issues can compound quickly, creating unintended disruptions or even new vulnerabilities.
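Agent conflict, in particular, is easy to make concrete. The toy sketch below (agent names, resources, and actions are all invented for illustration) shows two automated systems issuing contradictory instructions for the same resource, and a check that flags the collision for human arbitration rather than letting either agent auto-execute:

```python
from collections import defaultdict

# Hypothetical proposals from two automated agents acting independently.
proposed_actions = [
    ("firewall-agent", "10.0.0.7", "block"),
    ("uptime-agent",   "10.0.0.7", "allow"),   # contradicts the block above
    ("firewall-agent", "10.0.0.9", "block"),
]

def find_conflicts(actions):
    """Group proposals by resource and flag any resource that has more than
    one distinct action, so it can be escalated to a human instead of
    executed automatically."""
    by_resource = defaultdict(set)
    for _agent, resource, action in actions:
        by_resource[resource].add(action)
    return {res: acts for res, acts in by_resource.items() if len(acts) > 1}

print(find_conflicts(proposed_actions))  # -> conflict on 10.0.0.7 only
```

Without a check like this, whichever agent acts last silently wins, which is exactly the kind of compounding, unreviewed behavior the interview warns about.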
At the same time, the threat landscape is evolving just as rapidly.
Attackers are now using AI to develop threats faster, automate reconnaissance, and adapt in real time. Defenders are responding with AI-driven detection and remediation. The result is an environment where both sides are operating at machine speed—forcing organizations to rethink how security decisions are made and executed.
Compounding the challenge is the disappearance of the traditional network perimeter. Data, users, and applications now exist everywhere, and access is no longer confined to a controlled environment. In this perimeter-less world, both threats and defenses are distributed—and AI is embedded across both.
For enterprises, the takeaway is not to slow down AI adoption, but to rethink how it is deployed.
The goal is not autonomy. The goal is scale with control.
That means building architectures where human intent remains central, and AI is used to enforce that intent across increasingly complex environments. It also aligns closely with GTT’s broader strategy, including its Envision platform and SASE-based approach to networking and security, where orchestration and policy consistency are foundational.
Looking ahead, the question is not whether AI will play a central role in cybersecurity—it already does. The real question is whether organizations can maintain control as AI capabilities continue to expand.
As this conversation makes clear, the most effective model may not be AI replacing human decision-making—but human-directed AI operating at a speed and scale no human team could match.
Learn more: https://www.gtt.net/