
Vigil: An Open-Source AI SOC Built with an LLM-native Architecture
- GitHub repository: https://github.com/deeptempo/vigil
- License: Apache 2.0
- Community Discord: discord.gg/vigil-soc
- Website: www.vigilsoc.org
- Office hours and contributor information: available via the community Discord and GitHub
ArmorCode: The State of AI Risk Management
A report by The Purple Book Community, in conjunction with ArmorCode, documents the critical challenge known as “The Confidence Gap”. This gap is the measurable distance between security leaders’ confident beliefs about their AI governance programs and the stark reality of their operational posture. As AI adoption transitions from experimentation to an enterprise standard, these senior leaders—including CISOs, VPs, and Directors—believe they have control, yet the data reveals persistent, structural blind spots.
“The State of AI Risk Management” research, based on a survey of over 650 senior cybersecurity leaders across seven industries and two continents, highlights a significant disconnect between claims of mature posture and real-world outcomes. For instance, despite 86% of security leaders claiming to maintain a complete AI inventory and nearly 90% asserting full visibility into their AI footprint, 59% simultaneously admit that shadow AI is present and ungoverned within their organizations. This “Shadow AI Paradox” is made concrete by the cross-tabulations showing that 57% of organizations claiming a complete inventory also acknowledge the presence of ungoverned shadow AI.
Another key finding is the “Detection Delusion.” While 92% of leaders trust their security tools to find vulnerabilities in AI-generated code, and 83% say those tools work effectively, 70% report that confirmed or suspected vulnerabilities have already reached production. In other words, despite high confidence in security tooling, vulnerabilities are still bypassing controls. Furthermore, 82% of security professionals cite tool sprawl as actively hindering their ability to prioritize and remediate the risks that truly matter, which the report calls the “Prioritization Crisis.”
The report concludes that the core problem is not a lack of awareness among security leaders. Instead, they are “lacking the ability to convert that awareness into governed action at the pace AI demands,” the study found. This widening gap between knowledge and control is becoming a critical operational liability as AI accelerates development. The report maps this gap across four core dimensions to provide a blueprint for closing the divide before it becomes a breach.
Black Duck introduces AI security solution Black Duck Signal
Application security provider Black Duck today released Black Duck Signal, a solution designed to secure AI-generated code, which is being written faster than it can be secured.
Black Duck is addressing that mismatch with an AI security model, called Context AI, trained on years of human-curated intelligence and validated vulnerability data. The solution deploys multiple AI agents that can not only detect issues but can assess if vulnerabilities can be exploited, then validate those findings and either guide remediation or automate it, the company said in its announcement.
Black Duck’s launch highlights a broader industry recognition: fully autonomous software development remains aspirational, and accountable, secure software still requires human-informed oversight.
“Signal is built on an agentic AI architecture that goes beyond single-model analysis,” Black Duck said in the announcement. “Multiple specialized agents and models work together to analyze vulnerabilities, validate exploitability, prioritize risk and recommend or apply fixes using human-like logic—delivering more reliable outcomes across the software development life cycle.”
Cribl introduces capability to uncover hidden sensitive data patterns
AI-driven background detection that can continuously scan logs, traces and events to uncover unknown sensitive data is a new capability built into Cribl Guard, the company announced today.
Cribl, the AI Platform for Telemetry, said the new capability helps security teams find those data risks — including patterns of personally identifiable information, regulated data and secrets — before they are exposed.
“Security and IT teams don’t want to enable AI and agentic assistants on sensitive data and face costly, time-consuming cleanups. By analyzing data flowing through pipelines, background detection catches sensitive information in flight before it even gets to a data store,” said Dritan Bitincka, co-founder & chief product officer of Cribl. “This helps organizations transition from static policy enforcement to continuous, AI-driven risk discovery and mitigation.”
Background detection is powered by Cribl’s telemetry AI models, which identify new, unknown sensitive data and immediately surface findings in the Cribl interface. In the announcement, the company wrote: “By keeping a custom AI model in the [Cribl] Worker, a node where the data is being emitted and constantly analyzing data streams in the background, Cribl helps prevent unexpected sensitive data exposures before they become incidents, minimizing financial and operational impacts for the enterprise.”
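The core idea — scanning events in flight, before they reach a data store — can be sketched with simple pattern matching. The sketch below is illustrative only: Cribl's background detection uses AI models rather than fixed regexes, and the function names and pattern set here are assumptions, not Cribl's implementation.

```python
import re

# Illustrative patterns for a few common sensitive-data classes; a production
# detector would use trained models plus many more pattern families.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def scan_event(event: str) -> list[str]:
    """Return the sensitive-data classes detected in a single log event."""
    return [name for name, rx in PATTERNS.items() if rx.search(event)]

def scan_stream(events):
    """Inspect events in flight, surfacing hits before they reach a data store."""
    for event in events:
        hits = scan_event(event)
        if hits:
            yield event, hits  # flag the finding instead of forwarding blindly
```

Because the scan runs on the stream itself, a secret or identifier can be flagged (and masked or dropped) before it is ever persisted — the “in flight, before it even gets to a data store” property the announcement emphasizes.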
