MotherDuck Now Speaks Postgres: Fast Analytics Without Changing Your Stack
2026/04/21

TL;DR: MotherDuck's Postgres endpoint lets any Postgres-compatible client, driver, or BI tool query your MotherDuck database directly without a DuckDB library. Louisa Huang and Garrett O'Brien walk through the architecture, run live demos, and share connection pooling tips.
What the Postgres endpoint actually is
The Postgres endpoint is a translation layer that makes MotherDuck speak the Postgres wire protocol. Your application connects with a standard Postgres driver like node-postgres or JDBC, but execution still happens server-side on MotherDuck using DuckDB SQL. You keep your existing connection pooler and query patterns; MotherDuck handles the analytical work.
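Because the endpoint speaks the Postgres wire protocol, connecting from node-postgres is just a matter of pointing a standard config at MotherDuck. A minimal sketch, assuming the `pg.<region>.motherduck.com` host pattern and token-as-password convention described later in this post; the region, database name, and username here are placeholders:

```javascript
// Build a node-postgres config for MotherDuck's Postgres endpoint.
// The region ("us-east-1"), database name, and username are placeholders;
// authentication happens via the MotherDuck token in the password field.
function motherduckPgConfig(token) {
  return {
    host: "pg.us-east-1.motherduck.com", // pg.<region>.motherduck.com
    port: 5432,
    database: "my_db",                   // your MotherDuck database
    user: "user",                        // placeholder; the token is what matters
    password: token,                     // MotherDuck token as the password
    ssl: true,                           // the endpoint requires TLS
  };
}

// Usage with node-postgres:
//   const { Pool } = require("pg");
//   const pool = new Pool(motherduckPgConfig(process.env.MOTHERDUCK_TOKEN));
```

From the application's point of view this is indistinguishable from a regular Postgres database; only the SQL dialect executed server-side is DuckDB's.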
Why this matters if you're running analytics on Postgres
If your analytical queries are competing with transactional traffic on the same Postgres cluster, this gives you a way to split them apart. Analytical workloads route to MotherDuck through the same Postgres drivers your app already uses, so your Postgres instance stays lean. It also works in runtimes that couldn't load a DuckDB client before: Cloudflare Workers, Vercel Serverless Functions, AWS Lambda.
Live demo: NYC taxi dashboard on Vercel
Louisa builds a working NYC taxi dashboard with Next.js, Vercel, and node-postgres pointed at a free MotherDuck account on a Pulse instance. Every chart fires a query through the Postgres endpoint, and the dashboard stays responsive while aggregations run across millions of rows.
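Each chart in a dashboard like this boils down to one aggregation query sent through the pooled Postgres connection. A sketch of what one such query could look like; the table and column names (`taxi_trips`, `pickup_ts`) are illustrative, not taken from the actual demo:

```javascript
// Hypothetical aggregation behind a single dashboard chart: trips per hour.
// Table and column names are illustrative placeholders.
function tripsPerHourSql() {
  return `
    SELECT date_part('hour', pickup_ts) AS hour,
           count(*) AS trips
    FROM taxi_trips
    GROUP BY hour
    ORDER BY hour`;
}

// In a Next.js API route, a shared node-postgres pool would run it server-side:
//   const { rows } = await pool.query(tripsPerHourSql());
//   return Response.json(rows);
```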
Connection pooling and read scaling
Connection pooling works the same way it does with a real Postgres database. Open connections at server start, reuse them. A pool size of 10 to 100 handles most applications. If you need higher concurrency, swap your regular MotherDuck token for a Read Scaling token. Each pooled connection gets routed to a separate ducklink, so you get horizontal scaling without changing application code.
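The pattern above can be sketched as a module-level pool created once and reused, with the Read Scaling token swapped in via configuration. The env var names and the default pool size are illustrative assumptions:

```javascript
// Sketch: pool options built once at server start, then reused.
// Env var names and the pool size of 10 are illustrative defaults.
function poolOptions(env) {
  // Prefer a Read Scaling token when one is configured, so each pooled
  // connection can be routed to a separate replica for higher concurrency.
  const token = env.MOTHERDUCK_READ_SCALING_TOKEN || env.MOTHERDUCK_TOKEN;
  return {
    host: "pg.us-east-1.motherduck.com", // pg.<region>.motherduck.com
    port: 5432,
    password: token,
    ssl: true,
    max: 10, // 10 covers most apps; raise toward 100 for higher-scale workloads
  };
}

// const pool = new Pool(poolOptions(process.env)); // create once, reuse everywhere
```

Note that nothing else in the application changes when you switch tokens; the scaling decision lives entirely in configuration.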
When you still want the DuckDB client
The Postgres endpoint makes sense for embedded analytics, BI integrations, and serverless workloads. For exploratory analysis, bulk ingestion with the Appender API, or anything that benefits from DuckDB-Wasm in the browser, the DuckDB client is still the better fit. The MotherDuck getting started guide walks you through setting up a free workspace to try both.
FAQs
What is the MotherDuck Postgres endpoint?
The Postgres endpoint is a translation layer that lets MotherDuck speak the PostgreSQL wire protocol. You connect with any Postgres-compatible driver like node-postgres, JDBC, or rust-postgres, and MotherDuck handles the type and metadata conversion. Queries still run on MotherDuck's server-side DuckDB instances, so you get DuckDB performance without replacing your existing Postgres tooling.
Can I use my existing Postgres client with MotherDuck?
Yes. Popular drivers like node-postgres, JDBC, rust-postgres, and psycopg all work with the endpoint. You point your client at pg.<region>.motherduck.com on port 5432, use your MotherDuck token as the password, and connect over SSL. The wire protocol is Postgres, but the SQL dialect is DuckDB's, which is mostly compatible with Postgres SQL and has some of its own extensions.
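For tools that take a single connection URI rather than discrete settings (psql, many BI tools, or a driver's `connectionString` option), the same details can be packed into libpq URI form. A sketch; the username is a placeholder and the token should be URL-encoded since tokens can contain reserved characters:

```javascript
// Assemble a libpq-style connection URI for the Postgres endpoint.
// The username is a placeholder; auth is via the token in the password slot.
function motherduckUri(region, db, token) {
  return `postgresql://user:${encodeURIComponent(token)}` +
         `@pg.${region}.motherduck.com:5432/${db}?sslmode=require`;
}
```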
Does the MotherDuck Postgres endpoint support dbt?
Not yet. dbt support through the Postgres endpoint is actively in development. In the meantime, you can use dbt with MotherDuck through the dbt-duckdb adapter, which connects via the DuckDB client.
How should I handle connection pooling with the Postgres endpoint?
Use a connection pool the same way you would with a real Postgres database. Initial connections take a few hundred milliseconds for the TCP and TLS handshake and token authentication, so reusing pooled connections removes that overhead on subsequent queries. A pool size of 10 is reasonable for most applications; 100 covers higher-scale workloads. There is no hard connection limit, but standard Postgres pooling best practices apply.
Should I use the Postgres endpoint or the DuckDB client?
Use the Postgres endpoint when you're building customer-facing applications, connecting BI tools, or running in serverless environments like Cloudflare Workers or Vercel that can't load native DuckDB libraries. Use the DuckDB client for exploratory analysis, bulk data ingestion using the Appender API, or when you want DuckDB-Wasm in the browser for ultra-low-latency interactions.