DuckDB 1.3 Lands in MotherDuck: Performance Boosts, Even Faster Parquet, and Smarter SQL
2025/06/01 - 5 min read
We’re excited to share that DuckDB 1.3.0 is now available in MotherDuck, bringing a wave of performance and usability upgrades to make everyday SQL and analytics faster, friendlier, and more efficient.
A major release, DuckDB 1.3.0 improves performance in real-world scenarios with faster queries, updated SQL syntax, and smarter handling for Parquet files.
Read on for our favorite highlights from this release.
Even Better Real-World Query Performance
A New TRY() expression for safer queries
If you’re ingesting messy data sources or writing resilient data pipelines, the TRY() expression offers more graceful handling of bad data by returning NULL instead of raising an error on problematic rows.
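As a minimal sketch (the column and values here are made up), wrapping a cast that would normally fail turns the error into a NULL:

```sql
-- Casting 'not-a-number' to INTEGER would normally abort the query;
-- TRY() returns NULL for the bad row instead
SELECT TRY(CAST(raw_value AS INTEGER)) AS parsed_value
FROM (VALUES ('42'), ('not-a-number')) AS t(raw_value);
```

The first row parses to 42, the second becomes NULL, and the rest of the pipeline keeps running.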
Pushdown of inequality conditions into joins
A huge win for incremental dbt models and other workloads that rely on inequality join conditions: DuckDB can now push these predicates into the join itself, so DuckDB and MotherDuck users can expect much better performance when filtering.
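A typical incremental pattern that benefits looks something like this (table and column names are illustrative):

```sql
-- Only pull source rows newer than the target's current watermark;
-- in 1.3 the inequality condition can be pushed into the join
SELECT s.*
FROM source_events AS s
JOIN target_watermark AS w
  ON s.updated_at > w.max_updated_at;
```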
Pushdown of arbitrary expressions into scans
DuckDB can now push down more types of filter expressions directly into scans, reducing the amount of data that needs to be processed downstream to deliver up to 30X faster queries in these scenarios.
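As an illustration (the file path and columns are hypothetical), a filter involving a function call can now be evaluated inside the scan itself rather than after it:

```sql
-- The lower(...) filter is pushed into the Parquet scan,
-- so non-matching rows never leave the reader
SELECT order_id, amount
FROM read_parquet('orders/*.parquet')
WHERE lower(country) = 'nl';
```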
Blazing Fast Parquet Reads and Writes
With DuckDB 1.3.0, Parquet files are more efficient overall. While Parquet reads are even faster thanks to optimizations around caching, materialization, and read performance, Parquet writes are also faster due to a smarter use of multithreaded exports, improved compression mechanisms, and rowgroup merges.
Late materialization
DuckDB now defers fetching columns until absolutely necessary, resulting in 3–10x faster reads for queries with LIMIT.
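A query shaped like the one below benefits most (the file name is illustrative): DuckDB can sort on just the ordering column and fetch the remaining columns only for the rows that survive the LIMIT.

```sql
SELECT *
FROM read_parquet('events.parquet')
ORDER BY created_at DESC
LIMIT 100;
```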
~15% average speedup on reads
General read performance is significantly improved thanks to new scan and filter efficiency improvements, even without late materialization.
30%+ faster write throughput
Major improvements to multithreaded Parquet export performance result in even faster writes.
Better compression for large strings
Large strings can now be dictionary-compressed, resulting in reduced file sizes and performance boosts.
Smarter rowgroup combining
Smaller rowgroups from multiple threads are now merged at the time of write, resulting in more efficient Parquet files.
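No query changes are needed to benefit: a plain export like the one below (the table name is illustrative) automatically picks up the multithreaded writer, the improved compression, and the new rowgroup merging.

```sql
COPY (SELECT * FROM events) TO 'events.parquet' (FORMAT parquet);
```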
Performance Wins Big and Small
The release of 1.3.0 isn’t just about headline features: it also includes performance boosts across the stack, from aggregations and string scans to CTEs, with smarter algorithms, lower memory usage, and better parallelism.
Here are 12 performance highlights that caught our attention:
- 2x faster Top-N for large LIMIT queries: If you’re working with up to 250K rows, Top-N is now faster than sorting!
- 3x fewer memory allocations in aggregations: Improvements to string hashing and aggregation internals reduce memory pressure and lower contention, leading to more efficient execution of queries like COUNT(DISTINCT) at scale.
- ~25% faster large hash table creation: The parallelism strategy has been refined to avoid excessive task splitting, leading to better memory access patterns and faster hash table initialization during large joins.
- 20x faster UNNEST and UNPIVOT for small lists: DuckDB now processes multiple lists at once and eliminates unnecessary copying to deliver better performance for common patterns like unpivoting a few columns.
- 30–40% faster RANGE-based window functions: Parallelized task processing across hash groups and reduced lock contention during execution now lead to smoother, more efficient performance.
- 7x faster conversion to Python object columns: Python object conversion now skips intermediate steps, speeding up object columns and scalar UDFs.
- 5–25% faster LIKE '%text%' and CONTAINS string scans: DuckDB’s implementation has been unified and optimized to use memchr for early match detection, speeding up substring searches across the board.
- Faster list-of-list creation: Improved performance when constructing nested lists, boosting speed for transformation pipelines that rely on complex list structures.
- Reduced memory contention in hash joins: A parallel memset now initializes large join tables, eliminating single-threaded bottlenecks and improving performance on multi-core systems.
- Faster recursive CTEs and complex subqueries: A new top-down subquery decorrelation strategy unlocks better optimization for nested queries and improved performance for recursive CTEs.
- Improved performance and support for JSON-heavy queries: More parallelism in UNION ALL and fixes for multiple JSON edge cases.
- Faster decoding of short FSST-compressed strings: Decoding of inlined strings now skips unnecessary copying, resulting in ~15% speedups without regressions on longer strings.
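For instance, the UNNEST improvement targets exactly the small-list shapes that show up when unpivoting a handful of columns (a toy sketch):

```sql
-- Each row carries a short list; 1.3 expands many such lists at once
SELECT id, unnest(tags) AS tag
FROM (VALUES (1, ['a', 'b']), (2, ['c'])) AS t(id, tags);
```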
All these optimizations add up to one thing: even faster queries without lifting a finger.
What This Means for MotherDuck Users
If you're using MotherDuck, DuckDB 1.3 is already live. Your dbt models, dashboards, and notebooks will feel snappier right away.
While you can continue using your current version of DuckDB, we encourage you to upgrade your DuckDB clients to 1.3.0 as soon as you can to take advantage of the fixes and performance improvements.
Curious what version you’re on? Run this simple query to take a look:
```sql
SELECT version();
```
Huge Thanks to the DuckDB Team
At MotherDuck, we’re proud to support the best of DuckDB’s powerfully efficient query engine as a managed cloud service so you can easily manage a fleet of DuckDB instances and collaborate with your team. DuckDB 1.3.0 wouldn’t be possible without the incredible engineering work from the DuckDB team and contributors from the broader community and ecosystem.
If you have feedback or questions, join our Community Slack or reach out directly in the MotherDuck UI or online. We’re eager to hear your feedback so we can help you move faster from question to insight and build a ducking awesome product that best supports your workflow.
Happy querying - let’s get quacking!
