Do you need good advice on how to fix security issues caused by AI?

SCWorld.com reported that “AI has been deployed faster than the industry can secure it. Whether it’s LLM-based assistants, GenAI-powered workflows, or agentic AI automating decisions, traditional security tooling was never designed for this. Firewalls, EDR, SIEM, DLP—none were built for models that hallucinate, systems that evolve, or prompts that function as covert execution environments.”  The May 27, 2025 article entitled “Why AI breaks the traditional security stack — and how to fix it” (https://tinyurl.com/43v5u5dd) included these comments about “A new toolset for AI security”:

A robust security posture requires a layered defense, one that accounts for each phase of the AI pipeline and anticipates how AI systems are manipulated both directly and indirectly. Here are a few categories to prioritize:

1. Model scanners and red teaming.

Static scanners look for backdoors, embedded biases, and unsafe outputs in the model code or architecture. Dynamic tools simulate adversarial attacks to test runtime behavior. Complement these with red teaming for AI—testing for injection vulnerabilities, model extraction risks, or harmful emergent behavior.

2. AI-specific vulnerability feeds.

Traditional CVEs don’t capture the rapidly evolving threats in AI. Organizations need real-time feeds that track vulnerabilities in model architectures, emerging prompt injection patterns, and data supply chain risks. This information helps prioritize patching and mitigation strategies unique to AI.

3. Access controls for AI.

AI models often interact with vector databases, embeddings (numerical representations of meaning used to compare concepts in high-dimensional space), and unstructured data, making it difficult to enforce traditional row- or field-level access control. AI-aware access can help regulate what content gets used during inference and ensure proper isolation between models, datasets, and users.

4. Monitoring and drift detection.

AI is dynamic—it learns, it adapts, and sometimes it drifts. Organizations need monitoring capabilities that track changes in inference patterns, detect behavioral anomalies, and log full input-output exchanges for forensics and compliance. For agentic AI, that includes tracking decision paths and mapping activity across multiple systems.

5. Policy enforcement and response automation.

Real-time safeguards that act like “AI firewalls” can intercept prompts or outputs that violate content policies, such as generating malware or leaking confidential information. Automated response mechanisms can quarantine models, revoke credentials, or roll back deployments within milliseconds—faster than a human could possibly intervene.

Sounds like good advice to me….what do you think?

Previous
Previous

 Is Microsoft taking control of AI for the world?

Next
Next

Cybersecurity risks on the increase with infosec layoffs!