AWS S3 Secrets Troubleshooting

This page provides troubleshooting help for AWS S3 secrets in MotherDuck. For more information on creating a secret, see Create Secret.

Prerequisites

Before troubleshooting AWS S3 secrets, ensure you have:

  • Required: A valid MotherDuck Token with access to the target database
  • Required: AWS credentials (access keys, SSO, or IAM role)
  • Optional: DuckDB CLI (for troubleshooting purposes, though any DuckDB client will work)
  • Optional: AWS CLI (for bucket access verification)
note

AWS CLI PATH: If you installed the AWS CLI manually, you may need to add it to your system PATH. Package managers like Homebrew (macOS) typically add it to PATH automatically. Verify with which aws (macOS/Linux) or where aws (Windows); if it returns a path, you're all set!

Verify Secret Access

Check that the secret is configured

First, make sure you're connected to MotherDuck:

-- Connect to MotherDuck (replace 'your_db' with your database name)
ATTACH 'md:your_db';

Then type in the following:

.mode line
SELECT secret_string, storage FROM duckdb_secrets();

The output should look something like this. Make sure the secret string includes values for key_id, region, and session_token:

secret_string = name=aws_sso;type=s3;provider=credential_chain;serializable=true;scope=s3://,s3n://,s3a://;endpoint=s3.amazonaws.com;key_id=<your_key_id>;region=us-east-1;secret=<your_secret>;session_token=<your_session_token>
note

If you see no results, it means no secrets are configured. You'll need to create a secret first using CREATE SECRET; a minimal sketch follows this note.

If your output is missing a value for key_id, region, or session_token, you can recreate your secret by following the directions for CREATE OR REPLACE SECRET.
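For reference, a minimal key-based secret might look like the sketch below (use CREATE OR REPLACE SECRET if you are replacing an existing one). The secret name and placeholder values are illustrative; see Create Secret for the full syntax and options:

CREATE SECRET sample_s3_secret IN MOTHERDUCK (
    TYPE S3,
    KEY_ID '<your_access_key_id>',
    SECRET '<your_secret_access_key>',
    REGION 'us-east-1'
);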

If the output above looks correct, you can confirm that you have access to your AWS bucket by running these commands in your terminal (not in DuckDB):

# Log into AWS by running:
aws sso login

# Check bucket access:
aws s3 ls <your_bucket_name>

Example Output:

PRE lambda-deployments/
PRE raw/
PRE ducklake/
2025-05-29 07:03:26 14695690 sample-data.csv
note

Understanding the output: PRE indicates folders/prefixes, while files show their size and modification date. If you only see PRE entries, your bucket contains organized data in folders. To explore deeper, use aws s3 ls s3://<bucket-name>/<folder-name>/ or aws s3 ls s3://<bucket-name>/ --recursive to see all files.

Configure permissions in AWS

This is an example of an IAM policy that allows MotherDuck to access your S3 bucket. Note: if your bucket is encrypted with KMS keys, the IAM policy should also include the kms:Decrypt action in AllowBucketListingAndLocation.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowBucketListingAndLocation",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "arn:aws:s3:::your_bucket_name"
      ]
    },
    {
      "Sid": "AllowObjectRead",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::your_bucket_name/*"
      ]
    }
  ]
}
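Once this policy is attached to the role or user behind your secret, you can sanity-check end-to-end access with a quick query from MotherDuck. The object path below is only an example; substitute a file that actually exists in your bucket:

SELECT count(*)
FROM read_csv_auto('s3://your_bucket_name/sample-data.csv');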

AWS Credential Chain

MotherDuck automatically finds your AWS credentials using AWS's credential chain. This is the recommended approach, as it uses short-lived credentials (typically valid for 1 hour), which are more secure and reduce the risk of credential leakage. For most users, it works seamlessly with your existing AWS setup.

Most Common: AWS SSO

If you use AWS SSO (like most users), run:

aws sso login

To create a secret using the credential chain, run:

CREATE OR REPLACE SECRET my_secret IN MOTHERDUCK (
    TYPE s3,
    PROVIDER credential_chain,
    CHAIN 'env;config' -- optional: restrict the chain to environment variables and config files
);
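After running the statement above, you can confirm the secret was stored by querying duckdb_secrets() again (the name my_secret matches the example above):

SELECT name, type, provider, scope
FROM duckdb_secrets()
WHERE name = 'my_secret';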

Other Credential Types

The credential chain also works with:

  • Access keys stored in ~/.aws/credentials
  • IAM roles (if running on EC2)
  • Environment variables
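For example, if your credentials live in environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and optionally AWS_SESSION_TOKEN), a sketch like the following restricts the chain to that single source; the secret name is illustrative:

CREATE OR REPLACE SECRET env_only_secret IN MOTHERDUCK (
    TYPE s3,
    PROVIDER credential_chain,
    CHAIN 'env'
);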

Advanced: Role Assumption

note

Only needed for: Cross-account access, elevated permissions, or when you need to assume a different role than your current profile.

If you need to assume a specific IAM role, create a profile in ~/.aws/config:

[profile my_motherduck_role]
role_arn = arn:aws:iam::your_account_id:role/your_role_name
source_profile = your_source_profile

Then create a secret that uses this profile:

CREATE SECRET my_s3_secret (
    TYPE S3,
    PROVIDER credential_chain,
    PROFILE 'my_motherduck_role',
    REGION 'us-east-1' -- Use your bucket's region if different
);

Common Challenges

Scope

When using multiple secrets, the SCOPE parameter ensures MotherDuck knows which secret to use. You can validate which secret is being used with the which_secret function:

SELECT * FROM which_secret('s3://my-bucket/file.parquet', 's3');
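If which_secret returns the wrong secret, you can pin each secret to its bucket with SCOPE. The sketch below uses two hypothetical buckets to illustrate the pattern:

CREATE OR REPLACE SECRET raw_secret IN MOTHERDUCK (
    TYPE s3,
    PROVIDER credential_chain,
    SCOPE 's3://my-raw-bucket'
);

CREATE OR REPLACE SECRET analytics_secret IN MOTHERDUCK (
    TYPE s3,
    PROVIDER credential_chain,
    SCOPE 's3://my-analytics-bucket'
);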

Periods in bucket name (url_style = path)

Because of SSL certificate verification requirements, S3 bucket names that contain dots (.) cannot be accessed using virtual-hosted-style URLs. This is because AWS's wildcard SSL certificate (*.s3.amazonaws.com) only validates single-level subdomains.

If your bucket name contains dots, you have two options:

  1. Rename your bucket to remove dots (e.g., use dashes instead)
  2. Use path-style URLs by adding the URL_STYLE 'path' option to your secret:
CREATE OR REPLACE SECRET my_secret (
    TYPE s3,
    URL_STYLE 'path',
    SCOPE 's3://my.bucket.with.dots'
);
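With the path-style secret in place (combined with the credential setup from earlier sections), queries against the dotted bucket should work as usual; the object path here is hypothetical:

SELECT *
FROM read_parquet('s3://my.bucket.with.dots/data/file.parquet')
LIMIT 10;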

For more information, see Amazon S3 Virtual Hosting documentation.

What's Next

After resolving your AWS S3 secret issues: