---
sidebar_position: 2
title: Hypertenancy
description: Learn how MotherDuck's hypertenancy model provides dedicated compute for every user through per-user Ducklings, enabling predictable performance without noisy neighbors.
---

MotherDuck implements a unique tenancy model called **hypertenancy**: every user or service account gets their own dedicated DuckDB compute instance, called a Duckling. Unlike traditional data warehouses where all users share a single cluster, hypertenancy provides full compute isolation at the individual user level.

## The Problem with Traditional Multi-Tenancy

Traditional data warehouses and OLAP systems use a shared-compute model:

```mermaid
graph TB
    subgraph Users["All Users"]
        U1{{"User A"}}:::green
        U2{{"User B"}}:::green
        U3{{"User C"}}:::green
    end

    subgraph Warehouse["Shared Data Warehouse"]
        Cluster["Single Compute Cluster"]:::yellow
    end

    U1 --> Cluster
    U2 --> Cluster
    U3 --> Cluster
```

This shared model creates several challenges:

- **Noisy neighbors**: One user's expensive query affects everyone else's performance
- **Resource contention**: Concurrency limits apply across all users
- **Unpredictable performance**: Query times vary based on overall system load
- **Overprovisioning**: Resources must be sized for peak aggregate load, sitting idle most of the time
- **Difficult cost attribution**: Hard to track compute costs per user or customer

## How Hypertenancy Works

With hypertenancy, MotherDuck provisions a separate Duckling for each user:

```mermaid
graph TB
    subgraph Users["All Users"]
        U1{{"User A"}}:::green
        U2{{"User B"}}:::green
        U3{{"User C"}}:::green
    end

    subgraph MotherDuck["MotherDuck"]
        D1["Duckling A"]:::yellow
        D2["Duckling B"]:::yellow
        D3["Duckling C"]:::yellow
    end

    U1 --> D1
    U2 --> D2
    U3 --> D3
```

Each Duckling is a complete DuckDB instance with dedicated CPU, memory, and fast SSD spill space. This architecture delivers:

- **Perfect isolation**: No noisy neighbors—one user's workload never impacts another
- **Predictable performance**: Dedicated resources mean consistent query times
- **Independent scaling**: Each user's compute can be sized to their specific needs
- **Per-user billing**: Compute costs directly attributable to individual users
- **Fast cold starts**: Ducklings start in approximately 1 second
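
In practice, each authenticated connection is served by that account's own Duckling. As a minimal sketch of what that looks like from client code (the helper name is ours; `md:` and `motherduck_token` are the standard DuckDB client connection form):

```python
from urllib.parse import quote

def motherduck_dsn(database: str, token: str) -> str:
    """Build a MotherDuck connection string for one user or
    service account; each token routes to its own Duckling."""
    # "md:" is the MotherDuck scheme understood by the DuckDB client;
    # motherduck_token carries the account's access token.
    return f"md:{database}?motherduck_token={quote(token)}"

# e.g. duckdb.connect(motherduck_dsn("analytics", customer_a_token))
# would open a connection served by Customer A's dedicated Duckling.
```

Because isolation happens at the token level, no extra cluster or pool configuration is needed on the client side.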

## Scaling with Hypertenancy

Hypertenancy supports both vertical and horizontal scaling, letting you match compute resources to actual demand.

### Vertical Scaling: Duckling Sizes

Each user's Duckling can be sized independently to match that user's workload requirements:

| Duckling Size | Best For |
|---------------|----------|
| **Pulse** | Ad-hoc queries, read-heavy workloads, high-concurrency analytics |
| **Standard** | Core analytical workflows, ETL/ELT pipelines |
| **Jumbo** | Large-scale batch processing, complex joins |
| **Mega** | Demanding jobs with high data volumes |
| **Giga** | Largest and toughest batch workloads |

You can adjust Duckling size per user through the [MotherDuck UI](/about-motherduck/billing/duckling-sizes/#changing-duckling-sizes) or [REST API](/sql-reference/rest-api/ducklings-set-duckling-config-for-user/).

For example, in a customer-facing analytics scenario, you might provision:
- **Pulse** Ducklings for most customers running standard dashboards
- **Standard** or **Jumbo** Ducklings for enterprise customers with heavier workloads
- **Mega** or **Giga** Ducklings for batch data loading jobs
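
In application code, that tiering often reduces to a small lookup from customer tier to Duckling size. A hedged sketch (the tier names and size strings here are illustrative; see the Duckling sizes and REST API docs for the values your account accepts):

```python
# Illustrative tier-to-size mapping for customer-facing analytics.
DUCKLING_SIZE_BY_TIER = {
    "standard": "pulse",        # dashboards, read-heavy queries
    "enterprise": "jumbo",      # heavier analytical workloads
    "batch_loader": "giga",     # largest batch loading jobs
}

def duckling_size_for(tier: str) -> str:
    # Unknown tiers fall back to the smallest size.
    return DUCKLING_SIZE_BY_TIER.get(tier, "pulse")
```

The resulting size could then be applied per service account through the REST API endpoint linked above.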

### Horizontal Scaling: Read Scaling

When a single user needs to handle many concurrent queries—such as a service account powering a customer-facing application—you can enable [read scaling](/key-tasks/authenticating-and-connecting-to-motherduck/read-scaling/). Read scaling provisions additional read-only Ducklings that share the same data but distribute query load:

```mermaid
graph TB
    subgraph App["Application Users"]
        E1{{"End User 1"}}:::green
        E2{{"End User 2"}}:::green
        E3{{"End User 3"}}:::green
        E4{{"End User 4"}}:::green
    end
    S1[Service Account]:::watermelon

    subgraph MotherDuck["MotherDuck (Customer X)"]
        RW["Read-Write Duckling<br/>(Data Loading)"]
        R1["Read Scaling Duckling 1"]
        R2["Read Scaling Duckling 2"]
    end
    S1 --> RW
    E1 --> R1
    E2 --> R1
    E3 --> R2
    E4 --> R2

```

Read scaling lets you serve hundreds or thousands of concurrent end users through a single service account while maintaining predictable performance.
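
A read-scaling connection typically adds a hint identifying the end user, so repeated queries from the same user land on the same replica and benefit from its warm cache. A sketch, assuming the `session_hint` connection parameter described in the read scaling docs (the helper itself is ours):

```python
from urllib.parse import quote

def read_scaling_dsn(database: str, token: str, end_user_id: str) -> str:
    """Connection string for a read scaling service account.
    session_hint keeps a given end user pinned to the same
    read replica, so that Duckling's cache stays warm for them."""
    return (
        f"md:{database}"
        f"?motherduck_token={quote(token)}"
        f"&session_hint={quote(end_user_id)}"
    )
```

Each application request would pass its own end-user identifier while reusing the single service account token.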

## Hypertenancy Use Cases

### Customer-Facing Analytics

Hypertenancy is particularly powerful for [customer-facing analytics](/getting-started/customer-facing-analytics/). Each of your customers can have their own service account with isolated Ducklings:

- **Data isolation**: Each customer's data stays in their own database
- **Compute isolation**: One customer's workload never impacts another
- **Cache isolation**: Each customer's Duckling maintains its own cache, so cached query results and data remain private and predictable
- **Independent sizing**: Scale resources per customer based on their tier or needs
- **Predictable costs**: Bill customers accurately based on their actual compute usage

For a hands-on guide to building customer-facing analytics with per-customer service accounts, see the [Builder's Guide](/key-tasks/customer-facing-analytics/3-tier-cfa-guide/).

### Development and Production Pipelines

Service accounts enable clean separation between deployment environments. Each environment gets its own isolated compute:

| Environment | Service Account | Duckling Size | Purpose |
|-------------|-----------------|---------------|---------|
| Local/Dev | `dev-pipeline` | Pulse | Interactive development and testing |
| Staging | `staging-pipeline` | Standard | Pre-production validation |
| Production | `prod-pipeline` | Standard/Jumbo/... | Production workloads |

This separation ensures:
- Development experiments never impact production performance
- Each environment has appropriately sized compute
- Clear cost attribution per environment
- Easy rollback by switching service account credentials
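
A common way to wire this up is to keep one token per environment and resolve it at startup. A minimal sketch, with environment-variable names of our own choosing:

```python
import os

# Hypothetical variable names; one token per environment's service account.
TOKEN_ENV_VARS = {
    "dev": "MOTHERDUCK_TOKEN_DEV",          # dev-pipeline
    "staging": "MOTHERDUCK_TOKEN_STAGING",  # staging-pipeline
    "prod": "MOTHERDUCK_TOKEN_PROD",        # prod-pipeline
}

def token_for_env(env: str, environ=os.environ) -> str:
    """Look up the service account token for a deployment environment."""
    var = TOKEN_ENV_VARS[env]  # unknown environments fail fast
    token = environ.get(var)
    if not token:
        raise RuntimeError(f"{var} is not set")
    return token
```

Rolling back then means nothing more than pointing the deployment at a different token.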

### Data Warehouse & Data Pipeline Workloads

For data pipelines, you can assign dedicated service accounts to different stages of your data workflow. If you're using dbt, you can run different dbt models on different Duckling sizes.

| Pipeline Stage | Service Account | Duckling Size | Workload Pattern |
|----------------|-----------------|---------------|------------------|
| Ingestion | `ingest-service` | Jumbo/Mega | Bulk data loading, high I/O |
| Transformation | `transform-service-standard` / `transform-service-jumbo` | Standard/Jumbo | dbt models, ETL jobs |
| Reporting | `reporting-service` | Pulse (read scaling) | Dashboard queries, read-heavy |

This pattern provides:
- **Workload isolation**: Heavy batch ingestion jobs won't slow down interactive reporting queries
- **Right-sized compute**: Each stage gets the Duckling size optimized for its workload
- **Cost visibility**: Track compute costs per pipeline stage
- **Independent scheduling**: Run ingestion during off-peak hours without affecting daytime analysts
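
For the dbt setup mentioned above, one pattern is to define one target per service account in `profiles.yml` and pick the target per run. A hypothetical wrapper (the target and selector names are invented for illustration; `--select` and `--target` are standard dbt CLI flags):

```python
import subprocess  # used only if you uncomment the run call below

# Hypothetical dbt targets, one per service account token, so each
# model group runs on a differently sized Duckling.
STAGE_TARGETS = {
    "staging_models": "md_standard",  # transform-service-standard
    "heavy_models": "md_jumbo",       # transform-service-jumbo
}

def dbt_run_command(select: str):
    """Build the dbt invocation for a model selector."""
    target = STAGE_TARGETS[select]
    cmd = ["dbt", "run", "--select", select, "--target", target]
    # subprocess.run(cmd, check=True)  # uncomment to actually invoke dbt
    return cmd
```

An orchestrator can then route heavy models to the Jumbo-backed target while everything else stays on Standard.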

### Analytics & Data Science

For internal analytics teams, hypertenancy means analysts and data scientists each get their own compute. A data scientist running a complex ML feature extraction job won't slow down an analyst building a quick dashboard.

## Why Single-Node Beats Distributed for Per-User Compute

Traditional distributed data warehouses use clusters with multiple nodes that coordinate to execute queries. This architecture introduces:

- Network latency between nodes
- Coordination overhead
- Data shuffling costs

For queries that operate on one user's data at a time (the common pattern in hypertenancy), single-node execution on a Duckling eliminates this overhead entirely. The result is often faster query performance and lower costs compared to distributed systems, especially for interactive analytics workloads.

DuckDB's efficient columnar execution, combined with MotherDuck's fast storage architecture, means queries can handle datasets larger than memory with minimal performance impact.

## Related Content

- **Learn about Duckling sizes**: [Duckling Sizes](/about-motherduck/billing/duckling-sizes/)
- **Configure read scaling**: [Read Scaling](/key-tasks/authenticating-and-connecting-to-motherduck/read-scaling/)
- **Build customer-facing analytics**: [Customer-Facing Analytics Overview](/getting-started/customer-facing-analytics/)
- **Set up per-customer service accounts**: [Service Accounts Guide](/key-tasks/service-accounts-guide/)
