Nomadic, a San Francisco-based AI platform for processing autonomous vehicle and robotics video data, has raised $8.4M in seed funding led by TQ Ventures. The platform uses vision language models and agentic reasoning to transform raw video archives into structured, searchable datasets for edge case discovery and model training. The capital will fuel product development, hiring, and European market expansion.
Physical AI Investments Surge
Nomadic's raise aligns with a wave of physical AI funding, including Rhoda AI's $450M Series A in March 2026; Physical Intelligence is reportedly negotiating a $1B round at an $11B valuation. These moves, alongside NVIDIA's Physical AI Data Factory blueprint announced at GTC 2026, underscore demand for real-world data tools as robotics scales (per TechCrunch). Nomadic's focus on video reasoning positions it to capture this momentum.
Petabytes of Video Go Unused
Autonomous vehicle and robotics teams collect petabytes of fleet footage annually, but most of it goes unanalyzed due to manual review bottlenecks. Finding rare edge cases like unexpected turns or failures takes weeks, slowing model iteration. Current labeling tools from players like Scale AI and Encord rely on detection without validation, pulling engineering teams away from core development (per SiliconANGLE). Nomadic targets this gap with automated reasoning over long-horizon videos.
Agentic Reasoning Unlocks Video Value
Nomadic's platform applies vision language models for natural language queries on video archives, surfacing hypothesis-tested events with over 95% accuracy. Unlike frame-level labeling, it performs agentic validation—treating detections as hypotheses to confirm via multi-frame reasoning. Customers including Zoox, Mitsubishi Electric, Zendar, and Natix Network already use it at production scale for fleet monitoring and training data curation (per TechCrunch).
As Nomadic CEO Mustafa Bal explained:
"Juggling around terabytes of video, slamming that against hundreds of 100 billion-plus parameter models… is really insanely difficult."
This approach yields 35% performance gains from just 500 minutes of curated data and 68% better motion detection than frontier models.
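Nomadic's actual validation logic isn't public, but the detect-then-confirm pattern described above can be sketched in a few lines. Everything below—the `Detection` type, `validate_hypothesis`, and its thresholds—is a hypothetical illustration, not Nomadic's API: a per-frame detection counts as a confirmed event only if it is corroborated by nearby frames.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    frame: int    # frame index within the clip
    label: str    # detector's proposed event label
    score: float  # detector confidence in [0, 1]

def validate_hypothesis(detections, label, window=5, min_hits=3, min_score=0.6):
    """Treat per-frame detections as a hypothesis and confirm it only if
    the same label recurs with sufficient confidence within a sliding
    window of frames -- a toy stand-in for multi-frame reasoning."""
    hits = sorted(d.frame for d in detections
                  if d.label == label and d.score >= min_score)
    for i, start in enumerate(hits):
        # count confident detections falling inside [start, start + window)
        in_window = sum(1 for f in hits[i:] if f < start + window)
        if in_window >= min_hits:
            return True
    return False

dets = [
    Detection(10, "unexpected_turn", 0.9),
    Detection(11, "unexpected_turn", 0.7),
    Detection(12, "unexpected_turn", 0.8),
    Detection(40, "unexpected_turn", 0.95),  # isolated hit, not corroborated
]
assert validate_hypothesis(dets, "unexpected_turn") is True
```

The key design choice—requiring corroboration across frames rather than trusting any single detection—is what distinguishes validation from plain frame-level labeling, though a production system would use model-based reasoning rather than fixed thresholds.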
AI Specialists Validate Data Infra Bet
TQ Ventures led the round, drawn to Nomadic's first-principles approach to physical AI data moats, with participation from Pear VC, BAG Ventures, Predictive VC, Google DeepMind's Jeff Dean, and Cognition's Scott Wu. TQ's AI infrastructure portfolio, which includes Pathway and MindsDB, signals conviction in Nomadic as category-defining infrastructure (per investor sites). Pear VC adds autonomous vehicle expertise via Aurora, while angels like Dean lend technical credibility on scalable ML processing.
As TQ Ventures' Schuster Tanger noted:
"The second an autonomous vehicle company tries to build Nomadic internally, they’re distracted from what makes them win."
Physical AI Market Scales Rapidly
The physical AI market stands at $5.4B in 2025, projected to reach $61B by 2034 at 31% CAGR. Robotics and AV sectors drive this, needing tools to convert fleet data into training signals amid regulatory tailwinds like the SELF DRIVE Act. Nomadic competes with Encord and Labelbox but differentiates via reasoning over petabyte-scale archives.
Harvard Duo Builds ML Foundations
Co-founders Mustafa Bal (CEO) and Varun Krishnan (CTO), Harvard CS alums, bring complementary expertise. Bal contributed to Microsoft's DeepSpeed and ONNX Runtime, accelerating GPT models, and later worked on Cortex AI at Snowflake. Krishnan built Lyft's real-time driver positioning ML and worked on optimization at Flock Freight. Their track record in ML infrastructure and applied optimization fits video intelligence for AV fleets (per LinkedIn).
GTC Win Fuels Global Push
Post-funding, Nomadic won first place at the AWS AI Pitch at NVIDIA GTC 2026, earning $25K in credits. It partnered with Natix Network for global data access and debuted in Europe at Tech AD Berlin, sparking talks with BMW, Porsche, and Audi. Hiring will target ML and infrastructure engineers to scale processing to millions of hours of footage.
