
MatX Raises $500M Series B for LLM Chips

MatX raised a $500M Series B led by Jane Street and Situational Awareness LP for high-throughput LLM chips. The company claims the highest throughput in its class, exceeding 2,000 tokens/sec on large mixture-of-experts (MoE) models, and targets frontier AI labs.

Emel Kavaloglu

Feb 25, 2026

MatX, a designer of high-throughput chips optimized for large language models, has raised $500M in Series B funding led by Jane Street and Situational Awareness LP. The company says MatX One delivers the highest throughput and lowest latency for training, RL, prefill, and decode workloads on large mixture-of-experts (MoE) and dense models. The capital accelerates MatX One toward tapeout in under a year.

Frontier Workloads Fuel Series B

This raise builds on MatX's Series A from March 2025. Frontier AI labs demand hardware tuned for massive LLMs, prioritizing volume over versatility. MatX sacrifices small-model support to excel on high-scale MoE workloads.

GPUs Falter on MoE Decode Phases

Existing GPU architectures incur high latency in KV cache handling for long-context inference. Interconnect limitations hinder efficient scale-out across hundreds of chips for trillion-parameter models. These gaps slow iteration cycles for leading AI developers.
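To see why long-context KV caches strain GPU memory systems, a back-of-envelope sizing helps. The model dimensions below are illustrative assumptions for a frontier-scale model, not published figures for any specific chip or model:

```python
# Back-of-envelope KV cache sizing for long-context decode.
# Layer count, head count, and head dimension are illustrative
# assumptions, not any model's published configuration.

def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, bytes_per_elem=2):
    """Bytes needed to cache keys and values (the leading factor of 2)
    across all layers for one sequence."""
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_elem

# Hypothetical frontier-scale model: 100 layers, 8 KV heads of dim 128,
# fp16 cache, 1M-token context.
gib = kv_cache_bytes(layers=100, kv_heads=8, head_dim=128,
                     seq_len=1_000_000) / 2**30
print(f"{gib:.0f} GiB of KV cache per sequence")  # hundreds of GiB
```

At these assumed dimensions a single million-token sequence needs several hundred GiB of cache, which is why KV handling, not raw compute, becomes the bottleneck.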

Splittable Arrays Maximize Compute Density

MatX One features a splittable systolic array that can be partitioned flexibly to match MoE topologies. The company claims the highest FLOPS per mm² through this specialized design, and direct hardware control lets software optimize for evolving LLM paradigms.
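The idea of a splittable array can be sketched as carving one large compute grid into independent sub-arrays, one per active expert. The dimensions and the row-wise split policy below are assumptions for illustration; MatX has not published its partitioning scheme:

```python
# Illustrative sketch of a "splittable" compute array: one large grid
# partitioned into per-expert tiles. The split policy (equal row-wise
# chunks) is an assumption, not MatX's actual scheme.

def split_array(rows, cols, num_experts):
    """Split the array along its rows into equal sub-arrays, one per
    expert; returns (row_start, row_end, cols) for each expert tile."""
    assert rows % num_experts == 0, "illustrative: require an even split"
    chunk = rows // num_experts
    return [(e * chunk, (e + 1) * chunk, cols) for e in range(num_experts)]

# A hypothetical 256x256 array serving 8 experts concurrently:
for start, end, width in split_array(256, 256, 8):
    print(f"expert tile: rows {start}-{end - 1}, {width} columns")
```

The appeal of this layout is that each expert's matmul runs on its own tile at full utilization, instead of serializing experts through one monolithic array.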

SRAM Weights and HBM KV Caches Cut Latency

MatX One stores model weights in on-chip SRAM for sub-nanosecond access, rather than the uniform HBM hierarchy of GPUs. HBM is dedicated entirely to KV caches, supporting extended contexts without performance cliffs, while scale-up and scale-out interconnects are designed to minimize communication overhead.
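Why weight placement matters: decode is memory-bandwidth-bound, since each generated token must stream the active weights and KV cache once, so per-token latency is roughly bytes read divided by bandwidth. The bandwidth and size figures below are generic assumptions for illustration, not MatX specifications:

```python
# Memory-bound decode estimate: weights and KV cache are each streamed
# once per generated token from their respective memories. All numbers
# below are assumed for illustration, not MatX specs.

def per_token_ms(active_weight_gb, weight_bw_tbs, kv_gb, kv_bw_tbs):
    """Per-token decode latency in ms, assuming pure bandwidth limits."""
    weight_s = active_weight_gb / (weight_bw_tbs * 1000)  # GB / (GB/s)
    kv_s = kv_gb / (kv_bw_tbs * 1000)
    return (weight_s + kv_s) * 1000

# Hypothetical MoE with 40 GB of active weights and a 10 GB KV read:
hbm_only = per_token_ms(40, 3, 10, 3)    # both in HBM at ~3 TB/s
split = per_token_ms(40, 100, 10, 3)     # weights in much faster on-chip SRAM
print(f"HBM-only: {hbm_only:.1f} ms/token, SRAM weights: {split:.1f} ms/token")
```

Under these assumed numbers, moving weights to a much faster on-chip memory cuts per-token latency several-fold, leaving the HBM budget free for long-context KV reads.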

Quant Capital Signals Hardware Bet

Jane Street, a trading firm with deep expertise in high-performance computing, co-leads the round with Situational Awareness LP. Backers including Spark Capital, Nat Friedman and Daniel Gross's fund, and Andrej Karpathy add AI domain conviction. The investor roster validates MatX's first-principles rethink of LLM silicon.

Custom ASICs Target Frontier Labs

Frontier AI labs increasingly pivot to tailored chips for a competitive edge in model scale. MatX's claimed rate of more than 2,000 output tokens per second on 100-layer MoE models highlights the performance delta. Supply chain ties to Alchip and Marvell pave the production path.

Tapeout Accelerates with Partners

MatX aims to complete the MatX One tapeout within the year. The 100-person team leverages partners Alchip for fabrication and Marvell for interconnects, positioning MatX to ship hardware amid an intensifying AI compute race.

TAMradar monitors companies, people, and industries so you never miss important updates - tracking funding rounds, new hires, job openings, and 20+ signals.

Request access to get insights like this via webhooks or email.
