Building a Data Platform for the Agent Era at AheadComputing
How a RISC-V CPU startup built an agent-first data platform on MotherDuck, achieving 74% engineer adoption, cutting dashboard request times from 3 months to 90 minutes, and building a culture of data-driven hardware development.
AheadComputing is building a high-performance RISC-V CPU from scratch, founded by former Intel architects with over 100 years of combined CPU design experience. They're doing something most hardware companies don't: treating the data platform as a first-class engineering discipline, and building it agent-first around MotherDuck's MCP, Dives, and local dev workflows from the start.
- 74% of engineers onboarded onto MotherDuck within the first weeks of rollout
- 3 months → 1.5 hours for dashboard requests, replaced by self-service Dives
- 1-2 hour data query wait times eliminated—engineers now do exploratory analysis directly with the MotherDuck MCP server
- 1 week to production vs. 2-3 weeks for Snowflake—with more functionality
AheadComputing runs more than 10,000 chip simulations per day. As a project matures, testing shifts to performance convergence—full workloads, entire Linux boot flows, billions of instructions at a time—and the data grows by orders of magnitude. Hardware companies have never built data infrastructure for this. Tooling grew up as proprietary software, isolated from the broader ecosystem. When a project needs analytics, the typical answer is an ad hoc solution, abandoned when the project ends.
Ben Holtzman came to AheadComputing from Intel, where he'd lived this firsthand. "In the past, the thing that was always asked of us was: do some sort of historical analysis on this problem. And the issue was that data never existed."
Before MotherDuck, a single data engineer supported roughly 100 hardware engineers. Anyone who needed to query data waited 1-2 hours for a response. But the deeper problem wasn't speed—it was that hardware engineers don't write SQL. "They're focused on how to make SQL a more performant workload at a lower level of abstraction than actually writing it." For AI assistants to be useful to these engineers, the domain data had to exist and be accessible through Claude Code and MCP, not a BI tool.
Agents, Not BI, as the First-Stop Data Interface
The design enablement team's philosophy has four pillars: use an AI assistant for all engineering tasks, lean into MCP for every tool and data source, make data access seamless, and don't get in the way of local development. Claude Code sits in the middle, serving analytics to hardware engineers in a familiar interface.
When hardware engineers want to understand historical trends or compare a new study to past results, they ask Claude Code—and Claude Code queries MotherDuck through MCP to answer. The data warehouse is an MCP endpoint, not a dashboard they have to navigate to.
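In practice this kind of setup is usually a single MCP server entry in a Claude configuration file. The sketch below shows the general shape; the server name, command, and flags are assumptions based on MotherDuck's published MCP server, not AheadComputing's actual config, so the current MotherDuck docs are the authority here.

```json
{
  "mcpServers": {
    "motherduck": {
      "command": "uvx",
      "args": [
        "mcp-server-motherduck",
        "--db-path", "md:",
        "--motherduck-token", "${MOTHERDUCK_TOKEN}"
      ]
    }
  }
}
```

With an entry like this in place, Claude can list tables, run SQL, and return results without the engineer ever opening a SQL client.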
When Ben and the design enablement team started evaluating tools, AI integration was a P0. They started with Snowflake, but found that Snowflake's AI story runs through Cortex: a proprietary agent layer that limits model access and requires separate MCP endpoints for each tool type. Natural language SQL is one agent. Documentation queries are another, and so on. "For us, that's a headache because now we can't just tell someone to install one thing and have it work."
And because Cortex agents are tied to whatever model Snowflake has integrated, new models weren't available on day one of a release. "A few weeks back, a new Claude model was released. That same morning we were using Opus 4.6 with the MotherDuck MCP and getting results back. We weren't yet able to use Opus 4.6 in Snowflake itself."
AheadComputing evaluates every new model as it ships. They can't commit to a platform that makes that decision for them.
MotherDuck's MCP offered simplicity alongside the latest models. One setup by the design enablement team, and every engineer who authenticates into Claude—web, desktop, or Claude Code—automatically has access to the entire data warehouse. No per-user provisioning, and no separate agent to configure. This is what made 74% adoption possible in weeks, not months: analytics shipped to 121 engineers as a platform capability, not a tool each person had to install. And because it's standard MCP, any model works, the day it ships.
The Stack: Local DuckDB, MotherDuck, Agents in the Middle
The architecture reflects how hardware teams actually work—exploratory and local-first, but needing a shared historical record.
Chip simulation results flow through a Python data pipeline into Google Cloud Storage as Parquet, then into MotherDuck via DuckDB transforms and dbt.
Local DuckDB instances sit in notebooks alongside whatever data the hardware engineers are already generating: H5 files, data frames, CSVs. Their local workflows don't change. When they want to compare a fresh study to historical baselines, they do it with agent-assisted SQL—local DuckDB for the experiment, MotherDuck for the history.
The semantic layer is just markdown—an org-level Claude skill defining table relationships, metric definitions, and domain terminology that Claude Code loads automatically. Hardware domain knowledge doesn't exist in any public training data. A term like "latest release model" needs to map to a specific Unix path. Markdown is portable, version-controlled, and works across every model and interface.
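A semantic-layer skill of this kind might look like the sketch below. Every name, table, and path here is invented for illustration; the point is only the shape: plain markdown mapping domain vocabulary to tables and definitions, loaded as context before the agent writes SQL.

```markdown
# Skill: warehouse-semantics (illustrative sketch)

## Tables
- `sim_runs`: one row per simulation run; joins to `workloads` on `workload_id`.
- `releases`: one row per tagged model release.

## Metrics
- "pass rate": passing runs divided by total runs over the stated window.

## Domain terms
- "latest release model": resolve to the newest entry in `releases`,
  which maps to a project-specific Unix path (path omitted here).
```

Because it is just markdown in version control, the same file works across Claude web, desktop, and Claude Code, and survives model upgrades untouched.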
MotherDuck Dives—interactive data apps that engineers create through natural language in Claude Code—changed the team's operating model. The old workflow: spec a dashboard, wait for the design enablement team to build it, review, revise, wait again. The new workflow: an engineer explores data through MCP, builds a Dive in Claude Code, and a Claude skill deploys it to the team's internal production site.
"As soon as we had Dives, we just decided to make it more seamless to go from prototyping to deploying. And we removed ourselves from being the bottleneck."
Hardware Engineers Building Dashboards
One of the most exciting use cases has been unblocking AheadComputing's CI merge queue. The team used Claude Code and MotherDuck to build Dives visualizing test run schedules and license usage, exposing bottlenecks that cut CI wait times from 8 hours to 1-2 hours. When verification engineers reported volume regression failures, the team used the MCP to dig into raw SlurmDB data and found the root cause—a test generation bug—in a single session.
But the power team's story is the clearest proof of the model shift. Oriah Halamish, a power and performance engineer who had joined from Intel five months earlier, watched Ben demo a power Dive built live with Claude. He tried it himself. Within 1.5 hours, he had his first working dashboard—something that had previously required three months of back-and-forth between requirements and engineering. "They got it Claude-enabled and that changed everything," said Christopher Hules from the TFM team. Oriah now updates Dives in real time during meetings—people ask questions, he writes them down, and within an hour has an updated dashboard. The team maintains a central analytics catalog where every Dive across every team is linked and shared in weekly design reviews.
The shift has become cultural. "Every meeting now is shifted from trying to find answers to what to do about them. Users are able to make a Dive or chat with MotherDuck through Claude, find new design issues, and ask: what should we do about it?" Ben said. Other teams followed: the backend team got persistent management reporting, hardware benchmarking data migrated to MotherDuck for self-service analysis in Claude Code, and Ben built a job queue simulator that uploads to MotherDuck automatically and updates a live Dive his manager opens for resourcing decisions.
"A single license for a power analytics tool for six months could be $50,000. There are basically only three companies that create this vendor tooling. You're operating in a space with a virtual monopoly." MotherDuck offered a different way.
"We're converging on a pattern of fast, agent-driven, software-style iteration loops," said Alon Mahl, Vice President of Design Verification. "That's been a big shift: bringing that level of velocity and self-serve, data-driven culture into hardware, which historically just hasn't operated that way."