AI is being added to business processes faster than it is being secured, creating a wide gap that attackers are already exploiting, according to the SANS Institute.
The scale of the problem
Attackers are using AI to work at speeds that humans cannot match. Phishing messages are more convincing, privilege escalation happens faster, and automated scripts can adjust mid-attack to avoid detection. The report highlights research showing that AI-driven attacks can move more than 40 times faster than traditional methods. This means a breach can happen before a defender even sees the first alert.
Inside many security operations centers, AI is being added without a plan. Forty-two percent of SOCs surveyed said they are using AI and machine learning tools straight out of the box, without custom rules or integrations. Few have playbooks for AI-specific threats like prompt injection or model poisoning. Many teams also lack visibility into how AI systems behave, which creates blind spots that attackers can exploit.
This lack of readiness is particularly challenging for smaller SOCs that operate with limited staff. Rob T. Lee, Chief of Research and Chief AI Officer at SANS Institute, told Help Net Security that when resources are tight, CISOs should focus on one investment that provides both security and operational efficiency.
“The most important AI security investment for C-level executives during this year should be an adoption-led control plane,” Lee said. “This enables employees to access approved AI tools through a protected environment that includes fundamental security measures for access control, data protection, model tracking and monitoring. It allows users to perform their work tasks through AI while security teams maintain visibility into AI operations across all data domains.”
Lee added that CISOs should measure the success of this approach through tangible outcomes rather than abstract benchmarks.
“Success isn’t measured in abstract scores, but with clarity and control,” he explained. “The system demonstrates success through reduced, unauthorized AI tool usage and increases project flow through official channels, raising adoption of authorized platforms. The combination of specific results demonstrates how the system protects data within the organization while reducing AI incidents and enabling employees to create new solutions within established boundaries.”
A framework for secure AI
To help organizations close these gaps, the report outlines a three-part framework: Protect AI, Utilize AI, and Govern AI.
Protect AI covers the technical work needed to keep models, data, and infrastructure secure. This includes access controls, encryption, testing, and continuous monitoring. It also focuses on new attack types like model poisoning, where training data is tampered with, and prompt injection attacks that trick systems into leaking sensitive data or executing harmful commands.
Utilize AI focuses on helping defenders use AI to strengthen their own operations. The report stresses that SOCs must integrate AI into detection and response if they want to keep up with AI-driven attacks. Automation can help reduce analyst workload and speed up decision-making, but only if it is implemented carefully and monitored closely.
Lee emphasized that automation should also play a key role in defending against AI-driven phishing and voice impersonation attacks.
“The protection of small security teams requires early detection systems to fight AI-based phishing and voice impersonation attacks,” Lee said. “The initial step for defense should involve AI-powered email and call screening tools which detect obvious scams while SOAR/XDR playbooks should automatically dismiss low-confidence alerts to show analysts only important threats. This method reduces unnecessary alerts instead of producing additional noise which benefits organizations with limited security resources.”
He also highlighted identity protection as a critical layer of defense.
“The complete elimination of credential theft becomes possible through the implementation of FIDO2/WebAuthn passkeys as password and legacy MFA replacements while all sensitive requests need verification through a secondary channel,” Lee said. “The training program should concentrate on frontline staff members who work in finance, HR and patient services because they face the greatest risk of receiving AI-generated deceptive content.”
Pressure from regulators
AI regulation is evolving quickly. The EU AI Act, the NIST AI Risk Management Framework, and various national policies are creating new expectations for transparency and accountability. Failing to keep up can be costly. In one recent case, a European company was fined millions for not being able to provide a record of its AI systems and data sources after an incident.