Shayan Shafii

AI for Security: It's time to get over our trust issues

In the nearly three years since ChatGPT’s release, we’ve seen a number of enterprise teams move quickly to adopt AI: e.g. Customer Support (Decagon, Sierra), Engineering (Cursor, Windsurf), IT (Glean), Legal (Harvey, EvenUp) and more. The rapid adoption of these new AI tools is a testament not only to the technology’s potential, but also to Security teams – who, to their credit, have thus far largely avoided becoming the “department of no.” In green-lighting these purchases, CISOs have supported innovation rather than stifling it, a distinction that many Security leaders are now wearing with pride.

However, while CISOs have enabled their peers to move forward with AI, they’ve been a bit more hesitant when it comes to adopting it on their own turf. Known for caution, Security leaders have spent the last few years circling this new technology and working through a set of entirely understandable trust issues. Now, with more exposure under their belt, and with a few tangible success stories from adjacent teams, it feels like we’re approaching a tipping point. CISO perspectives on AI have shifted from skepticism to curiosity, and in some cases, to action.

This shift is well overdue. When you consider the type of work that LLMs have proven remarkably good at – text-heavy, high-volume, and time-sensitive tasks – it is clear that Security is one of the best customers for AI. This is a function whose collective budgets support 20+ public companies, and where the idea of reaching “inbox zero” has been a fantasy for decades. Surely AI can add some value here? Fortunately, dozens of founders agree, and the market has responded with some of the most exciting new cybersecurity companies to emerge in years. Across almost every role, there is a task for which an AI-native platform would be more suitable, and it’s only a matter of time before these platforms take hold.

Understanding the Landscape

Below, I’ve outlined just seven of the fields being transformed by talented Security founders leveraging the latest advancements in AI. The list of companies is certainly not exhaustive, and for the sake of simplicity, I’ve bundled some areas together based on shared budget.

Vulnerability Management

Vulnerability Management tools have historically focused on scanning assets to discover security risks. In other words, they tend to find issues rather than fix them. Companies in this category can be organized by the specific types of assets that they scan: Tenable, Qualys, and Rapid7 (all public companies!) collectively garnered over $2B in revenue in 2024, largely on the back of their flagship products – vulnerability scanners for corporate endpoints and internal networks. Wiz, another venture darling recently acquired for $32B, shattered records by scaling to nearly $1B in ARR in five years selling just a vulnerability scanner for cloud infrastructure. Snyk, yet another breakout success, scaled to over $300M in ARR by selling scanners for application source code and libraries.

The problem is that as enterprise environments have grown larger and more diverse, so too have the outputs from these scanners. Today, it is completely normal for Security teams to come into the office to tens of thousands of open alerts from these tools – a deafening volume that vulnerability practitioners have simply become numb to. While the alerts pile up at software speed, they are still closed out at human speed. To make matters worse, the number of known vulnerabilities (CVEs) grows every year, while the time-to-exploit for new CVEs simultaneously shrinks.

Enter new AI platforms like Cogent Security and Maze. These companies fight the dynamic outlined above by using LLMs to (1) actually read every finding, (2) understand where it fits in the broader context of the enterprise, (3) determine its exploitability, and (4) perform the necessary remediation and patching work where needed. In the future, customers of these tools will be able to wake up in the morning and simply review a digestible feed of decisions made by their AI Vulnerability Analyst. Soon enough, “inbox zero” could actually become a reality.
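To make that workflow concrete, here is a minimal sketch of such a triage loop in Python, with a hand-written scoring function standing in for the LLM calls. The field names and weights are purely illustrative – they are not any vendor’s actual logic.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    asset: str
    internet_facing: bool    # context: is the asset reachable from outside?
    exploit_available: bool  # exploitability: is a public exploit known?

def triage(findings):
    """Order findings the way an AI analyst might: weigh each finding's
    context and exploitability, then decide what to fix first."""
    def score(f):
        s = 0
        if f.exploit_available:
            s += 2  # a known exploit outweighs most other signals
        if f.internet_facing:
            s += 1  # exposed assets are directly reachable by attackers
        return s
    return sorted(findings, key=score, reverse=True)

findings = [
    Finding("CVE-2024-0001", "internal-db", False, False),
    Finding("CVE-2024-0002", "public-web", True, True),
    Finding("CVE-2024-0003", "vpn-gateway", True, False),
]
print([f.cve_id for f in triage(findings)])
# the internet-facing, exploitable finding rises to the top of the queue
```

In a real product, the `score` function is where the LLM earns its keep – reading the finding, the asset inventory, and the threat intel instead of two boolean flags.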

Security Data Pipelines

The notable success in this category is Cribl. Founders Clint Sharp, Dritan Bitincka, and Ledion Bitincka all previously worked at Splunk, where they witnessed firsthand the challenges that enterprises faced in managing (and paying for…) the exploding volume of their security telemetry. As enterprises faced increasing compliance pressure, they were required to store more telemetry for longer periods of time, which in turn dramatically inflated the cost of their SIEMs. Today, it’s quite standard for the largest enterprises to pay 7-8 figures annually for their SIEM platforms.

Cribl’s solution for this problem was to deliver a managed pipeline that intelligently filters logs, allowing enterprises to have their cake and eat it too: keep all of the logs that you need for security/compliance requirements, while paying a semi-reasonable price for your SIEM. For every dollar that customers spend on Cribl, they save a multiple of that on their SIEM costs. Given the immediate ROI on the purchase, Cribl continues to be a clear success, scaling to $100M ARR in their first 4 years, and quickly crossing $200M ARR shortly thereafter.
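To illustrate the basic mechanics, here is a toy reduction pipeline in Python: drop low-value event types, suppress exact duplicates, and forward only what the SIEM actually needs. The event types and field names are invented for the example; real products like Cribl apply far richer, user-configurable rules.

```python
import json

# Invented rules for illustration -- production pipelines make these configurable.
NOISY_EVENTS = {"heartbeat", "debug"}

def reduce_logs(raw_lines):
    """Filter and dedupe a stream of JSON log lines before SIEM ingestion."""
    seen = set()
    kept = []
    for line in raw_lines:
        event = json.loads(line)
        if event.get("type") in NOISY_EVENTS:
            continue  # drop low-value telemetry outright
        key = (event.get("type"), event.get("src"), event.get("msg"))
        if key in seen:
            continue  # suppress exact duplicates
        seen.add(key)
        kept.append(event)
    return kept

raw = [
    '{"type": "heartbeat", "src": "fw-1", "msg": "ok"}',
    '{"type": "auth_fail", "src": "vpn", "msg": "bad password"}',
    '{"type": "auth_fail", "src": "vpn", "msg": "bad password"}',
]
print(len(reduce_logs(raw)))  # only 1 of 3 lines survives to the SIEM
```

Every line dropped here is a line the SIEM never bills for – which is the entire value proposition of the category.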

Given that (a) this market is still in its infancy, and (b) security logs are essentially just text, a whole crop of new AI startups has emerged recently, leveraging LLMs to drive further efficiency. Companies like Beacon Security, Abstract Security, CeTu, Observo AI, and more are aiming to differentiate in two ways: (1) using AI to drive even more data reduction than their pre-LLM counterparts, and (2) supporting a wider variety of data sources and destinations. Given that Splunk’s latest public filing showed over $4.2B in recurring revenue, we believe that this market has a tremendous amount of whitespace, and that the opportunity is still there for an AI-native platform to take hold.

Application Security (AppSec)

Application Security is a broad and varied space – for more detail on some of the specific subcategories, you can refer to a blog I published at Scale last year, as well as this more recent piece on AI pentesting from Malika Aubakirova at a16z. For the sake of simplicity, I’ve bundled all of these categories under one umbrella here.

The most recent outcome in AppSec that I continue to marvel at is Chainguard: a container security company recently valued at $3.5B and a soon-to-be member of the $100M ARR club. Four years ago, when most of the AppSec ecosystem was focused on aggregating and prioritizing findings from code scanners, Chainguard stood alone in trying to do something ambitious and different: solve the root cause of the alert-sprawl problem. They rightly identified that most of the noise produced by these scanners came from vulnerable dependencies installed in container base images – so if you just did the work to offer “clean” versions of these images, the scanner findings would plummet to near zero.

While Chainguard brought this idea to market in 2021, new companies like Echo, Minimal, and Minimus are working to deliver AI-native solutions in the space. Practically, this means that enterprises can now (1) build clean images on their current operating systems rather than re-platform onto Chainguard’s Wolfi images, and (2) serve AI code-generating agents in addition to human devs. In a future where most software is written by AI, code-generating agents can operate more securely by downloading whatever images and dependencies they need from these trusted sources, ad-hoc and at runtime.

While the above can be classified as security measures on the “shift left” side of the spectrum, there is also tremendous innovation happening on the “shift right” side of things: web application pentesting. Companies like Mindfort, RunSybil, Terra Security, XBOW, and more (see here) are delivering sophisticated agents that use classical pentesting tools and techniques to breach web applications. In the future, these tools can be integrated into CI/CD to ensure that no application is shipped to production without having first passed the pentest. Where prior consultative pentesting engagements happened only periodically, largely to satisfy checkbox compliance requirements, on-demand AI platforms can run continuously and deliver real security value. Over time, these companies can also expand beyond web applications into lower-level, network-layer pentesting.

Identity

Identity is an incredibly deep rabbit hole that I won’t be exploring in depth here; for that, please refer to this article from my friend Rak Garg, Partner at BCV. With that said, you can generally segment the identity market by the “type” of identity that each platform serves. For example, Okta’s flagship product is an identity directory for internal corporate employees, whereas Auth0’s is for external customers. The through-line in these categories is that each platform has historically managed identities belonging to real-life human beings. However, in the last few years we’ve seen an explosion of interest in managing “non-human identities” – e.g. machine infrastructure, service accounts, and yes, AI agents. These all present new types of identities that now need to be inventoried, authenticated, and authorized. Similarly, the systems (or tools) that these new identities interact with need additional infrastructure to share resources more securely.

New companies like Keycard, Arcade and Hush Security have emerged recently to help tackle this problem from a variety of angles. Depending on your role in the ecosystem – e.g. whether you are an agent builder, a customer of these agents, or a resource owner – there are new challenges for you to consider:

  1. Agent Builders: These are typically newer AI application startups that are building and selling agentic products: e.g. companies like Decagon, Sierra, Glean, Traversal, Dropzone, and more. These companies are all building agents that need access to resources owned by their customers. For example, Decagon may need access to a customer’s CRM; Traversal may need to access the same customer’s Datadog telemetry, and so on. These agent builders need tooling to securely obtain credentials and access those resources from within their customers’ environments.
  2. Agent Customers: These can be any of the thousands of enterprises now using agentic software that accesses internal resources. For example, a quick look at Glean’s website shows customers like Samsung, Citi, Bill.com, Instacart, and more. These are all massive enterprises that are purchasing agentic software and will now need to (a) monitor these identities, and (b) manage their permissions and behaviors across various internal resources.
  3. Resource Owners: This is any company that would like their product to interact nicely with agents. I like to think of this as any company that has an API. Companies with APIs care about how outside developers engage with their products, and invest accordingly in elegant developer experiences while also enforcing security controls like API keys, authentication, and authorization. These same companies are now making similar investments to make their products agent-friendly – as evidenced by the growth in MCP servers since the protocol’s release in November 2024. At the time of this writing, there are already ~4,700! Given that MCP has well-known security gaps, these companies may need outside tooling to help safely expose resources to agents on behalf of their customers.
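As a toy illustration of the resource-owner side of this problem, the sketch below implements a default-deny authorization check for agent tool calls. The agent names and scopes are hypothetical, and a real deployment would rely on signed, short-lived credentials rather than an in-memory table – this only shows the least-privilege pattern.

```python
# Hypothetical grants: each agent identity is bound to an explicit
# allowlist of scopes, and anything not granted is denied by default.
AGENT_GRANTS = {
    "support-agent": {"crm:read"},
    "sre-agent": {"metrics:read", "metrics:write"},
}

def authorize(agent_id, scope):
    """Return True only if this agent was explicitly granted the scope."""
    return scope in AGENT_GRANTS.get(agent_id, set())

assert authorize("support-agent", "crm:read")
assert not authorize("support-agent", "metrics:write")  # least privilege
assert not authorize("unknown-agent", "crm:read")       # default deny
```

The point of the pattern: an agent that can read the CRM should not be able to touch anything else, and an unrecognized agent should get nothing at all.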

In addition to the companies outlined above, companies like Twine Security are building AI agents that run your human identity operations in-house. Enterprises wrestling with problems traditionally addressed by IGA, PAM, and ITDR tools (identity governance, privileged access management, and identity threat detection and response, respectively) can now deploy AI agents to solve these issues, instead of throwing headcount and point solutions at the problem. In the future, we can envision a world where one or two platforms give you all of the visibility and controls you need across human, customer, and agent identities.

Digital Risk Protection (DRP)

Digital Risk Protection (DRP) tools help enterprises protect their brand, image, and reputation from malicious impersonation on the Internet. For example, if an attacker purchased <chem1stry.vc>, where our actual domain is <chemistry.vc>, and used this domain to impersonate our brand, we would certainly want to know. In addition to domain spoofing, attackers can also target you on social media, where they can impersonate your brand, or organize swarms of bot accounts that use AI to produce malicious content. 
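As a toy illustration of one detection signal, the sketch below flags candidate domains that sit within a small edit distance of the real brand – exactly the `chem1stry.vc` case above. Production DRP platforms combine many more signals (homoglyph mappings, WHOIS data, page content) than this single check.

```python
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming (rolling rows)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def looks_like_squat(candidate, brand, max_dist=2):
    """Flag domains within a small, nonzero edit distance of the real brand."""
    return 0 < edit_distance(candidate, brand) <= max_dist

print(looks_like_squat("chem1stry.vc", "chemistry.vc"))  # True: one character swapped
print(looks_like_squat("example.com", "chemistry.vc"))   # False: unrelated domain
```

The nonzero lower bound matters: the legitimate domain itself has distance 0 and must never be flagged.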

The most notable player in this category is ZeroFox: a company whose last public filing showed $188M in ARR before being taken private for $350M in April 2024. The sub-2x revenue multiple is an indication not only of their slow growth, but also of the service-heavy nature of their business.

Given (a) the opportunity created by the acquisition of ZeroFox; (b) the general tailwind of AI-generated deepfake attacks; and (c) the opportunity to use AI to counter those attacks scalably with software, the market has responded with a number of new, AI-native DRP platforms like Outtake and Doppel. These platforms leverage the latest advancements in AI to crawl the deepest corners of the Internet, interpret malicious behavior, and automate mitigation strategies. Many of these platforms are also expanding to other vulnerable channels like email, SMS, and voice. While attackers are moving quickly to adopt AI in their attack patterns, these platforms enable enterprises to finally “fight AI with AI,” even outside their own four walls.

The above is a non-exhaustive look at just a few of the cybersecurity fields being totally reinvented by AI. We imagine that in less than six months, the same market map would require several new columns and dozens of new logos. But already, we’re seeing the outlines of what the next generation of AI-native cybersecurity companies could look like.

If you’re a founder, CISO, or researcher working at the intersection of AI and security — I’d love to hear from you. Drop me a line at shayan@chemistry.vc.
