S3 Storage

Amazon S3 and S3-compatible storage backend using aioboto3 for async operations. Supports AWS S3 and S3-compatible services like Cloudflare R2, DigitalOcean Spaces, MinIO, and Backblaze B2.

Note

Requires the aioboto3 package. Install with: pip install litestar-storages[s3] or pip install aioboto3

Configuration

class litestar_storages.backends.s3.S3Config[source]

Bases: object

Configuration for S3-compatible storage.

Supports AWS S3 and S3-compatible services like:

  • Cloudflare R2

  • DigitalOcean Spaces

  • MinIO

  • Backblaze B2

Variables:
  • bucket – S3 bucket name

  • region – AWS region (e.g., “us-east-1”)

  • endpoint_url – Custom endpoint for S3-compatible services

  • access_key_id – AWS access key ID (falls back to environment/IAM)

  • secret_access_key – AWS secret access key

  • session_token – AWS session token for temporary credentials

  • prefix – Key prefix for all operations (e.g., “uploads/”)

  • presigned_expiry – Default expiration time for presigned URLs

  • use_ssl – Use SSL/TLS for connections

  • verify_ssl – Verify SSL certificates

  • max_pool_connections – Maximum connection pool size

bucket: str
region: str | None = None
endpoint_url: str | None = None
access_key_id: str | None = None
secret_access_key: str | None = None
session_token: str | None = None
prefix: str = ''
presigned_expiry: timedelta
use_ssl: bool = True
verify_ssl: bool = True
max_pool_connections: int = 10
__init__(bucket, region=None, endpoint_url=None, access_key_id=None, secret_access_key=None, session_token=None, prefix='', presigned_expiry=<factory>, use_ssl=True, verify_ssl=True, max_pool_connections=10)

Storage Class

class litestar_storages.backends.s3.S3Storage[source]

Bases: BaseStorage

Amazon S3 and S3-compatible storage backend.

Uses aioboto3 for async S3 operations with support for AWS S3 and S3-compatible services.

Example

>>> # AWS S3
>>> storage = S3Storage(
...     config=S3Config(
...         bucket="my-bucket",
...         region="us-east-1",
...     )
... )
>>> # Cloudflare R2
>>> storage = S3Storage(
...     config=S3Config(
...         bucket="my-bucket",
...         endpoint_url="https://account.r2.cloudflarestorage.com",
...         access_key_id="...",
...         secret_access_key="...",
...     )
... )

Note

The client is lazily initialized on first use. Credentials can come from:

  1. Explicit configuration (access_key_id, secret_access_key)

  2. Environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)

  3. IAM roles (when running on EC2/ECS/Lambda)

Parameters:

config (S3Config)

__init__(config)[source]

Initialize S3Storage.

Parameters:

config (S3Config) – Configuration for the S3 backend

Raises:

ConfigurationError – If required configuration is missing

async put(key, data, *, content_type=None, metadata=None)[source]

Store data at the given key.

Parameters:
  • key (str) – Storage path/key for the file

  • data (bytes | AsyncIterator[bytes]) – File contents as bytes or async byte stream

  • content_type (str | None) – MIME type of the content

  • metadata (dict[str, str] | None) – Additional metadata to store with the file

Return type:

StoredFile

Returns:

StoredFile with metadata about the stored file

Raises:

StorageError – If the upload fails
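
A minimal sketch of a call, assuming storage is an initialized S3Storage and pdf_bytes already holds the file contents:

stored = await storage.put(
    "reports/2024-q1.pdf",
    pdf_bytes,
    content_type="application/pdf",
    metadata={"uploaded-by": "reporting-service"},
)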

async get(key)[source]

Retrieve file contents as an async byte stream.

Parameters:

key (str) – Storage path/key for the file

Yields:

Chunks of file data as bytes

Return type:

AsyncIterator[bytes]

async get_bytes(key)[source]

Retrieve entire file contents as bytes.

Parameters:

key (str) – Storage path/key for the file

Return type:

bytes

Returns:

Complete file contents as bytes
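
A short sketch contrasting the two retrieval methods, assuming get() is an async generator as the Yields field above suggests; handle_chunk is a hypothetical placeholder for whatever processes each chunk:

# Stream a large object chunk by chunk
async for chunk in storage.get("reports/2024-q1.pdf"):
    handle_chunk(chunk)

# Load a small object fully into memory
settings = await storage.get_bytes("config/settings.json")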

async delete(key)[source]

Delete a file.

Parameters:

key (str) – Storage path/key for the file

Return type:

None

Note

S3 delete is idempotent - deleting a non-existent key succeeds silently.

async exists(key)[source]

Check if a file exists.

Parameters:

key (str) – Storage path/key for the file

Return type:

bool

Returns:

True if the file exists, False otherwise
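
A small sketch combining exists() and delete(); because delete is idempotent, the existence check is optional and shown only for illustration:

if await storage.exists("temp/session-upload.bin"):
    await storage.delete("temp/session-upload.bin")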

async list(prefix='', *, limit=None)[source]

List files with optional prefix filter.

Parameters:
  • prefix (str) – Filter results to keys starting with this prefix

  • limit (int | None) – Maximum number of results to return

Yields:

StoredFile metadata for each matching file

Return type:

AsyncGenerator[StoredFile, None]
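
A minimal sketch of iterating the results, assuming list() is an async generator as the Yields field above suggests; each yielded item is a StoredFile describing one object:

# List at most 100 objects under the "uploads/" prefix
async for stored in storage.list(prefix="uploads/", limit=100):
    print(stored)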

async url(key, *, expires_in=None)[source]

Generate a presigned URL for accessing the file.

Parameters:
  • key (str) – Storage path/key for the file

  • expires_in (timedelta | None) – Optional expiration time (defaults to config.presigned_expiry)

Return type:

str

Returns:

Presigned URL string

Note

Presigned URLs allow temporary access to private S3 objects without requiring AWS credentials.

async copy(source, destination)[source]

Copy a file within the storage backend.

Uses S3’s native copy operation for efficiency.

Parameters:
  • source (str) – Source key to copy from

  • destination (str) – Destination key to copy to

Return type:

StoredFile

Returns:

StoredFile metadata for the new copy

async move(source, destination)[source]

Move/rename a file within the storage backend.

Uses S3’s copy + delete operations.

Parameters:
  • source (str) – Source key to move from

  • destination (str) – Destination key to move to

Return type:

StoredFile

Returns:

StoredFile metadata for the moved file

async info(key)[source]

Get metadata about a file without downloading it.

Parameters:

key (str) – Storage path/key for the file

Return type:

StoredFile

Returns:

StoredFile with metadata
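
A minimal sketch; the returned StoredFile describes the object without transferring its body:

details = await storage.info("uploads/photo.jpg")
print(details)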

async close()[source]

Close the S3 storage and release resources.

This method clears the cached session. Note that aioboto3 sessions don’t require explicit cleanup, but clearing the reference allows for garbage collection and prevents accidental reuse after close.

Return type:

None
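
A minimal sketch of explicit cleanup, for example from an application shutdown hook:

# Clear the cached session once the storage is no longer needed
await storage.close()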

async start_multipart_upload(key, *, content_type=None, metadata=None, part_size=5242880)[source]

Start a multipart upload.

Use this for large files (typically > 100MB) to enable:

  • Parallel part uploads

  • Resumable uploads

  • Better handling of network failures

Parameters:
  • key (str) – Storage path/key for the file

  • content_type (str | None) – MIME type of the content

  • metadata (dict[str, str] | None) – Additional metadata to store with the file

  • part_size (int) – Size of each part in bytes (minimum 5MB for S3)

Return type:

MultipartUpload

Returns:

MultipartUpload object to track the upload state

Raises:

StorageError – If initiating the upload fails

async upload_part(upload, part_number, data)[source]

Upload a single part of a multipart upload.

Parameters:
  • upload (MultipartUpload) – The MultipartUpload object from start_multipart_upload

  • part_number (int) – Part number (1-indexed, must be sequential)

  • data (bytes) – The part data to upload

Return type:

str

Returns:

ETag of the uploaded part

Raises:

StorageError – If the part upload fails

async complete_multipart_upload(upload)[source]

Complete a multipart upload.

Parameters:

upload (MultipartUpload) – The MultipartUpload object with all parts uploaded

Return type:

StoredFile

Returns:

StoredFile metadata for the completed upload

Raises:

StorageError – If completing the upload fails

async abort_multipart_upload(upload)[source]

Abort a multipart upload.

This cancels an in-progress multipart upload and deletes any uploaded parts. Use this to clean up failed uploads.

Parameters:

upload (MultipartUpload) – The MultipartUpload object to abort

Raises:

StorageError – If aborting the upload fails

Return type:

None
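
A sketch of the manual multipart flow, under these assumptions: chunks is an iterable of byte strings, each at least 5MB except possibly the last, and the MultipartUpload object tracks the parts recorded by upload_part, as described above:

upload = await storage.start_multipart_upload(
    "videos/raw-footage.mp4",
    content_type="video/mp4",
    part_size=8 * 1024 * 1024,
)
try:
    # Part numbers are 1-indexed and sequential
    for part_number, chunk in enumerate(chunks, start=1):
        await storage.upload_part(upload, part_number, chunk)
    stored = await storage.complete_multipart_upload(upload)
except Exception:
    # Remove any parts that were already uploaded
    await storage.abort_multipart_upload(upload)
    raise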

async put_large(key, data, *, content_type=None, metadata=None, part_size=10485760, progress_callback=None)[source]

Upload a large file using multipart upload.

This is a convenience method that handles the multipart upload process automatically. It splits the data into parts, uploads them, and completes the upload.

Parameters:
  • key (str) – Storage path/key for the file

  • data (bytes | AsyncIterator[bytes]) – File contents as bytes or async byte stream

  • content_type (str | None) – MIME type of the content

  • metadata (dict[str, str] | None) – Additional metadata to store with the file

  • part_size (int) – Size of each part in bytes (default 10MB)

  • progress_callback (ProgressCallback | None) – Optional callback for progress updates

Return type:

StoredFile

Returns:

StoredFile with metadata about the stored file

Raises:

StorageError – If the upload fails
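
A minimal sketch, assuming backup_stream is an async iterator of bytes (or a bytes object) already in scope; put_large handles the start/upload/complete sequence internally:

stored = await storage.put_large(
    "exports/full-backup.tar.gz",
    backup_stream,
    content_type="application/gzip",
    part_size=16 * 1024 * 1024,
)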

Usage Examples

AWS S3

from litestar_storages import S3Storage, S3Config

# Using explicit credentials
storage = S3Storage(
    config=S3Config(
        bucket="my-uploads",
        region="us-east-1",
        access_key_id="AKIAIOSFODNN7EXAMPLE",
        secret_access_key="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
    )
)

# Using environment variables or IAM role (recommended for production)
storage = S3Storage(
    config=S3Config(
        bucket="my-uploads",
        region="us-east-1",
    )
)

Cloudflare R2

storage = S3Storage(
    config=S3Config(
        bucket="my-bucket",
        endpoint_url="https://ACCOUNT_ID.r2.cloudflarestorage.com",
        access_key_id="R2_ACCESS_KEY",
        secret_access_key="R2_SECRET_KEY",
    )
)

DigitalOcean Spaces

storage = S3Storage(
    config=S3Config(
        bucket="my-space",
        endpoint_url="https://nyc3.digitaloceanspaces.com",
        region="nyc3",
        access_key_id="DO_SPACES_KEY",
        secret_access_key="DO_SPACES_SECRET",
    )
)

MinIO (Local Development)

storage = S3Storage(
    config=S3Config(
        bucket="test-bucket",
        endpoint_url="http://localhost:9000",
        access_key_id="minioadmin",
        secret_access_key="minioadmin",
        use_ssl=False,
    )
)

With Key Prefix

storage = S3Storage(
    config=S3Config(
        bucket="my-bucket",
        region="us-east-1",
        prefix="uploads/2024/",
    )
)

# Key "photo.jpg" becomes "uploads/2024/photo.jpg" in S3
await storage.put("photo.jpg", data)

Presigned URLs

from datetime import timedelta

# Use default expiry (1 hour)
url = await storage.url("documents/report.pdf")

# Custom expiry
url = await storage.url(
    "documents/report.pdf",
    expires_in=timedelta(hours=24),
)

# Configure default expiry
storage = S3Storage(
    config=S3Config(
        bucket="my-bucket",
        presigned_expiry=timedelta(hours=6),
    )
)

Server-Side Operations

Copy and move operations use S3’s native APIs for efficiency:

# Server-side copy (fast, no data transfer through client)
await storage.copy("source.txt", "destination.txt")

# Server-side move (copy + delete)
await storage.move("old-path.txt", "new-path.txt")

Credential Resolution Order

Credentials are resolved in this order:

  1. Explicit configuration (access_key_id, secret_access_key)

  2. Environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)

  3. Shared credentials file (~/.aws/credentials)

  4. IAM role credentials (EC2, ECS, Lambda)

For production deployments on AWS, IAM roles are recommended over explicit credentials.