General Analysis Raises $10M Seed for Agentic AI Security

General Analysis raised $10M seed led by Altos Ventures for agentic AI security. Provides red-teaming, inventory, and guardrails against multi-step exploits for enterprises like NVIDIA and DeepMind.

Emel Kavaloglu

General Analysis Raises $10M Seed for Agentic AI Security

General Analysis, a provider of security layers for agentic AI systems, has raised $10M in seed funding led by Altos Ventures. The platform delivers automated red-teaming, asset inventory, vulnerability forecasting, and runtime guardrails to uncover multi-step exploits and safeguard enterprise AI agents in production. The funds will support team expansion and product development.

Agentic Security Funding Accelerates

This round arrives as agentic AI draws intense investor focus. Trent AI emerged from stealth with $13M seed in April 2026 for AI agent lifecycle security. Lakera previously raised $30M for real-time GenAI defenses, while Protect AI secured $75M+ targeting MLSecOps. General Analysis differentiates through full-stack agent pipeline testing.

Multi-Step Exploits Evade Static Checks

Agentic systems introduce novel risks like prompt injections and tool graph manipulations. General Analysis research showed an adversarial agent tricking 50 of 55 customer service bots into offering over $10M in fake perks. Another finding exposed Supabase MCP vulnerabilities leaking entire SQL databases. These empirical demonstrations reveal gaps in traditional security.

Automated Red-Teaming Maps Attack Paths

The platform starts with AI security asset management, inventorying models, knowledge bases, and agent pipelines for injection risks. Its core is context-aware automated red-teaming, simulating multi-step exploits across tool graphs. Runtime guardrails then enforce controls derived from these findings, monitoring for poisoning and drift.

As co-founder Rez Havaei told TechStartups:

"We hear from security teams that they want agents that are secure by design… The problem is that feeling safer and being safer are not the same thing."

This empirical approach outperforms static methods by measuring real failure rates.

Empirical Testing Defines Differentiation

Open-source GA Guard models support long-context moderation up to 256k tokens and detect jailbreaks, PII leaks, and more, outperforming cloud providers with F1 scores up to 0.893. The suite garnered 42k+ downloads in its first week. Customers already include NVIDIA, Jane Street, DeepMind, Cohere, Snap, NASA, Caltech, Harvard, and CMU.

Co-founder Maximilian Li emphasized:

"Our position is that security for AI systems is an empirical problem… You can only measure how often it fails."

Altos Leads Elite AI Security Bet

Altos Ventures partner Tae Yoon highlighted the shift: "Agentic systems represent a paradigm shift in security." Participants include 645 Ventures, Menlo Ventures, and Y Combinator. This backer mix signals conviction in scaling empirical defenses for enterprise AI deployments amid rising threats.

AI Security Market Scales Rapidly

The AI security platforms market stands at $4.3B in 2026, projected to reach $31.2B by 2036 at 22% CAGR. 87% of security teams prioritize agentic AI adoption, per Ivanti research. Bessemer Venture Partners calls securing AI agents the defining cybersecurity challenge of 2026.

Competitors like Prompt Security ($23M) and Mindgard ($8M) focus on narrower GenAI protections, leaving room for General Analysis' agent-centric platform.

Founders Bring Deep AI Safety Expertise

CEO Rez Havaei brings AI adversarial testing from Cohere and NVIDIA, plus quant trading at Jane Street and founding vals.ai. Co-founder Max Li offers AI safety research from Redwood Research and Haize Labs, with Jane Street quant experience. Rex Liu contributed agent engineering at Google DeepMind and 20+ zero-days at HackerOne. This trio combines top pedigrees from CMU, Harvard, Caltech.

Revenue Ramp and Hiring Ahead

The company eyes $2M revenue within 12-18 months ahead of Series A. Post-funding, General Analysis is hiring engineers and researchers to secure agentic AI. Early traction includes partnerships like Together AI and open-source tools like the Jailbreak Cookbook.

TAMradar monitors companies, people, and industries so you never miss important updates - tracking funding rounds, new hires, job openings, and 20+ signals.

Request access to get insights like this via webhooks or email.

Request access →

Index