Vigil: An Open-Source AI SOC Built with an LLM-native Architecture

A new open-source project, Vigil, launched at RSA today, harnesses the rapidly advancing reasoning capabilities of models such as Anthropic’s Claude. Available under an Apache 2.0 license, Vigil, created by DeepTempo, ships with 13 specialized AI agents, 30+ integrations, and 7,200+ detection rules spanning Sigma, Splunk, Elastic, and KQL formats, according to the company.
Additionally, Vigil includes four initial production-tested multi-agent workflows that tie together underlying capabilities to address common use cases in the SOC: incident response, investigation, threat hunting, and forensic analysis. Users can easily add additional integrations, custom rules, and agents.
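Contributed detection rules in Sigma format follow the open, vendor-neutral Sigma specification. As a hedged illustration of the kind of rule a team might contribute, here is a minimal hypothetical Sigma rule; the title, placeholder UUID, and detection values are illustrative and not taken from the Vigil repository:

```yaml
title: Suspicious Encoded PowerShell Command Line   # illustrative example, not a Vigil rule
id: 00000000-0000-0000-0000-000000000000            # placeholder UUID
status: experimental
description: Detects PowerShell launched with an encoded command, a common obfuscation technique.
logsource:
  category: process_creation
  product: windows
detection:
  selection:
    Image|endswith: '\powershell.exe'
    CommandLine|contains: '-EncodedCommand'
  condition: selection
level: medium
```

Because Sigma is a generic rule format, a rule like this can be converted to Splunk, Elastic, or KQL queries with standard Sigma tooling, which is what makes a single contribution useful across multiple backends.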
Teams bring their own enterprise model deployments, their own rule sets, and their own integrations for operational context. The architecture is structured so that, as reasoning models improve, those advances surface directly in analyst-facing workflows.
Vigil is vendor-independent. Contributors are welcome from across the security ecosystem, including AI SOC vendors, internal security teams, services organizations, open-source maintainers, and developers building on MCP and agentic frameworks. The Trail of Bits skills repository represents one natural area of collaboration, offering reusable building blocks for cyber-specific reasoning that Vigil is designed to interoperate with via clear Claude skills definitions.
 
Extending Vigil is simple: multi-agent workflows are defined in a single SKILL.md file, tool integrations use the open MCP standard, and detection rules can be contributed in any major format. Every MCP server in the security ecosystem is a potential Vigil integration. Every skill someone writes makes the platform more capable for everyone.
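To make the single-file workflow idea concrete, here is a hypothetical minimal sketch, assuming the SKILL.md follows the common Claude skills convention of YAML frontmatter followed by markdown instructions; the workflow name, agent names, and steps below are illustrative assumptions, not taken from the Vigil repository:

```markdown
---
name: triage-phishing-report          # hypothetical workflow name
description: Coordinates agents to triage a user-reported phishing email.
---

# Triage a reported phishing email

1. Use an email-analysis agent to extract headers, URLs, and attachments.
2. Query threat-intelligence integrations (via MCP tools) for indicator reputation.
3. If any indicator is malicious, open an incident and draft a containment summary.
4. Otherwise, close the report with a benign verdict and notify the reporter.
```

The appeal of this pattern is that the workflow is plain text: a security team can review, version, and share it like any other file, without touching application code.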
Vigil is available now:
 
git clone --recurse-submodules https://github.com/deeptempo/vigil.git
cd vigil && ./start_web.sh
# Open http://localhost:6988 — your AI SOC is running. 

ArmorCode: The State of AI Risk Management

A report by The Purple Book Community, in conjunction with ArmorCode, documents a critical challenge it calls “The Confidence Gap”: the measurable distance between security leaders’ confident beliefs about their AI governance programs and the stark reality of their operational posture. As AI adoption transitions from experimentation to an enterprise standard, these senior leaders, including CISOs, VPs, and Directors, believe they have control, yet the data reveals persistent, structural blind spots.

“The State of AI Risk Management” research, based on a survey of over 650 senior cybersecurity leaders across seven industries and two continents, highlights a significant disconnect between claims of mature posture and real-world outcomes. For instance, despite 86% of security leaders claiming to maintain a complete AI inventory and nearly 90% asserting full visibility into their AI footprint, 59% simultaneously admit that shadow AI is present and ungoverned within their organizations. This “Shadow AI Paradox” is made concrete by cross-tabulations showing that 57% of organizations claiming a complete inventory also acknowledge the presence of ungoverned shadow AI.

Another key finding is the “Detection Delusion.” While 92% of leaders trust their security tools to effectively find vulnerabilities in AI-generated code (with 83% saying their tools work), 70% report that confirmed or suspected vulnerabilities have already reached production. This finding suggests that despite high confidence in security tools, vulnerabilities are still bypassing controls. Furthermore, 82% of security professionals cite tool sprawl as actively hindering their ability to prioritize and remediate the risks that truly matter, a problem the report calls the “Prioritization Crisis.”

The report concludes that the core problem is not a lack of awareness among security leaders. Instead, they are “lacking the ability to convert that awareness into governed action at the pace AI demands,” the study found. This widening gap between knowledge and control is becoming a critical operational liability as AI accelerates development. The report maps this gap across four core dimensions to provide a blueprint for closing the divide before it becomes a breach.

Black Duck introduces AI security solution Black Duck Signal

Application security provider Black Duck today released Black Duck Signal, a solution designed to secure AI-generated code, which is being produced faster than it can be secured.

Black Duck is addressing that mismatch with an AI security model, called Context AI, trained on years of human-curated intelligence and validated vulnerability data. The solution deploys multiple AI agents that can not only detect issues but can assess if vulnerabilities can be exploited, then validate those findings and either guide remediation or automate it, the company said in its announcement.

Black Duck’s launch highlights a broader industry shift: fully autonomous software development remains aspirational, and accountable, secure software still requires human-informed oversight.

“Signal is built on an agentic AI architecture that goes beyond single-model analysis,” Black Duck said in the announcement. “Multiple specialized agents and models work together to analyze vulnerabilities, validate exploitability, prioritize risk and recommend or apply fixes using human-like logic—delivering more reliable outcomes across the software development life cycle.”

Cribl introduces capability to uncover hidden sensitive data patterns

AI-driven background detection that can continuously scan logs, traces and events to uncover unknown sensitive data is a new capability built into Cribl Guard, the company announced today.

Cribl, the AI Platform for Telemetry, said the new capability helps security teams find those data risks before they are exposed, including patterns of identifiable information, regulated data and secrets.

“Security and IT teams don’t want to enable AI and agentic assistants on sensitive data and face costly, time-consuming cleanups. By analyzing data flowing through pipelines, background detection catches sensitive information in flight before it even gets to a data store,” said Dritan Bitincka, co-founder & chief product officer of Cribl. “This helps organizations transition from static policy enforcement to continuous, AI-driven risk discovery and mitigation.”

Background detection is powered by Cribl’s telemetry AI models, which identify new, unknown sensitive data and immediately surface findings in the Cribl interface. In the announcement, the company wrote: “By keeping a custom AI model in the [Cribl] Worker, a node where the data is being emitted and constantly analyzing data streams in the background, Cribl helps prevent unexpected sensitive data exposures before they become incidents, minimizing financial and operational impacts for the enterprise.”