S3 Storage¶
Amazon S3 and S3-compatible storage backend using aioboto3 for async operations.
Supports AWS S3 and S3-compatible services like Cloudflare R2, DigitalOcean Spaces,
MinIO, and Backblaze B2.
Note
Requires the aioboto3 package. Install with:
pip install litestar-storages[s3] or pip install aioboto3
Configuration¶
- class litestar_storages.backends.s3.S3Config[source]¶
Bases: object
Configuration for S3-compatible storage.
Supports AWS S3 and S3-compatible services such as:
- Cloudflare R2
- DigitalOcean Spaces
- MinIO
- Backblaze B2
- Variables:
bucket – S3 bucket name
region – AWS region (e.g., “us-east-1”)
endpoint_url – Custom endpoint for S3-compatible services
access_key_id – AWS access key ID (falls back to environment/IAM)
secret_access_key – AWS secret access key
session_token – AWS session token for temporary credentials
prefix – Key prefix for all operations (e.g., “uploads/”)
presigned_expiry – Default expiration time for presigned URLs
use_ssl – Use SSL/TLS for connections
verify_ssl – Verify SSL certificates
max_pool_connections – Maximum connection pool size
- Parameters:
- __init__(bucket, region=None, endpoint_url=None, access_key_id=None, secret_access_key=None, session_token=None, prefix='', presigned_expiry=<factory>, use_ssl=True, verify_ssl=True, max_pool_connections=10)¶
Storage Class¶
- class litestar_storages.backends.s3.S3Storage[source]¶
Bases: BaseStorage
Amazon S3 and S3-compatible storage backend.
Uses aioboto3 for async S3 operations with support for AWS S3 and S3-compatible services.
Example
>>> # AWS S3
>>> storage = S3Storage(
...     config=S3Config(
...         bucket="my-bucket",
...         region="us-east-1",
...     )
... )

>>> # Cloudflare R2
>>> storage = S3Storage(
...     config=S3Config(
...         bucket="my-bucket",
...         endpoint_url="https://account.r2.cloudflarestorage.com",
...         access_key_id="...",
...         secret_access_key="...",
...     )
... )
Note
The client is lazily initialized on first use. Credentials can come from:
1. Explicit configuration (access_key_id, secret_access_key)
2. Environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
3. IAM roles (when running on EC2/ECS/Lambda)
- Parameters:
config (S3Config)
- __init__(config)[source]¶
Initialize S3Storage.
- Parameters:
config (S3Config) – Configuration for the S3 backend
- Raises:
ConfigurationError – If required configuration is missing
- async put(key, data, *, content_type=None, metadata=None)[source]¶
Store data at the given key.
- Parameters:
- Return type: StoredFile
- Returns:
StoredFile with metadata about the stored file
- Raises:
StorageError – If the upload fails
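For example, a minimal sketch of a direct upload with an explicit content type and custom metadata (the key, the pdf_bytes variable, and the already-configured storage instance are placeholders; see Usage Examples below for configuration):
# Upload raw bytes with a content type and extra metadata
stored = await storage.put(
    "reports/2024/summary.pdf",
    pdf_bytes,
    content_type="application/pdf",
    metadata={"owner": "reporting-service"},
)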
- async get(key)[source]¶
Retrieve file contents as an async byte stream.
- Parameters:
key (str) – Storage path/key for the file
- Yields:
Chunks of file data as bytes
- Raises:
StorageFileNotFoundError – If the file does not exist
StorageError – If the retrieval fails
- Return type:
- async get_bytes(key)[source]¶
Retrieve entire file contents as bytes.
- Parameters:
key (str) – Storage path/key for the file
- Return type: bytes
- Returns:
Complete file contents as bytes
- Raises:
StorageFileNotFoundError – If the file does not exist
StorageError – If the retrieval fails
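A small sketch of both retrieval styles, assuming the keys exist and that get() is an async generator that can be iterated directly (if it instead returns an awaitable stream, add an await before iterating):
# Small object: read everything into memory at once
data = await storage.get_bytes("reports/2024/summary.pdf")

# Large object: stream it chunk by chunk to keep memory bounded
async for chunk in storage.get("videos/launch.mp4"):
    handle_chunk(chunk)  # handle_chunk is a placeholder for your own processing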
- async delete(key)[source]¶
Delete a file.
Note
S3 delete is idempotent: deleting a non-existent key succeeds silently.
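For instance, both of these calls succeed even though the second one targets a key that no longer exists (the key is a placeholder):
await storage.delete("uploads/old-avatar.png")
await storage.delete("uploads/old-avatar.png")  # already gone, still no error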
- async list(prefix='', *, limit=None)[source]¶
List files with optional prefix filter.
- Parameters:
- Yields:
StoredFile metadata for each matching file
- Return type:
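A sketch of collecting matching objects, assuming list() yields StoredFile entries that can be consumed with async for (prefix and limit are the documented parameters):
# Gather metadata for up to 100 objects under "uploads/"
files = [item async for item in storage.list(prefix="uploads/", limit=100)]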
- async url(key, *, expires_in=None)[source]¶
Generate a presigned URL for accessing the file.
- Parameters:
- Return type: str
- Returns:
Presigned URL string
Note
Presigned URLs allow temporary access to private S3 objects without requiring AWS credentials.
- async copy(source, destination)[source]¶
Copy a file within the storage backend.
Uses S3’s native copy operation for efficiency.
- Parameters:
- Return type: StoredFile
- Returns:
StoredFile metadata for the new copy
- Raises:
FileNotFoundError – If the source file does not exist
StorageError – If the copy fails
- async move(source, destination)[source]¶
Move/rename a file within the storage backend.
Uses S3’s copy + delete operations.
- Parameters:
- Return type: StoredFile
- Returns:
StoredFile metadata for the moved file
- Raises:
FileNotFoundError – If the source file does not exist
StorageError – If the move fails
- async info(key)[source]¶
Get metadata about a file without downloading it.
- Parameters:
key (str) – Storage path/key for the file
- Return type: StoredFile
- Returns:
StoredFile with metadata
- Raises:
StorageFileNotFoundError – If the file does not exist
StorageError – If retrieving metadata fails
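One common pattern is an existence check built on the documented StorageFileNotFoundError; a sketch, assuming the exception is importable from the package root (the import path is an assumption):
from litestar_storages import S3Storage, StorageFileNotFoundError  # import path assumed

async def exists(storage: S3Storage, key: str) -> bool:
    # info() fetches metadata only; a missing key raises StorageFileNotFoundError,
    # which is translated into False here
    try:
        await storage.info(key)
        return True
    except StorageFileNotFoundError:
        return False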
- async close()[source]¶
Close the S3 storage and release resources.
This method clears the cached session. Note that aioboto3 sessions don’t require explicit cleanup, but clearing the reference allows for garbage collection and prevents accidental reuse after close.
- Return type: None
- async start_multipart_upload(key, *, content_type=None, metadata=None, part_size=5242880)[source]¶
Start a multipart upload.
Use this for large files (typically > 100MB) to enable:
- Parallel part uploads
- Resumable uploads
- Better handling of network failures
See the sketch after abort_multipart_upload() below for the full manual flow.
- Parameters:
- Return type: MultipartUpload
- Returns:
MultipartUpload object to track the upload state
- Raises:
StorageError – If initiating the upload fails
- async upload_part(upload, part_number, data)[source]¶
Upload a single part of a multipart upload.
- Parameters:
- Return type: str
- Returns:
ETag of the uploaded part
- Raises:
StorageError – If the part upload fails
- async complete_multipart_upload(upload)[source]¶
Complete a multipart upload.
- Parameters:
upload (MultipartUpload) – The MultipartUpload object with all parts uploaded
- Return type: StoredFile
- Returns:
StoredFile metadata for the completed upload
- Raises:
StorageError – If completing the upload fails
- async abort_multipart_upload(upload)[source]¶
Abort a multipart upload.
This cancels an in-progress multipart upload and deletes any uploaded parts. Use this to clean up failed uploads.
- Parameters:
upload (MultipartUpload) – The MultipartUpload object to abort
- Raises:
StorageError – If aborting the upload fails
- Return type:
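Putting the four methods together, a hedged sketch of the manual flow: it assumes part numbers start at 1 (as S3 requires), that upload_part() records each part on the MultipartUpload object (complete_multipart_upload() only receives that object), and that chunks is some iterable of byte blocks at least part_size long:
upload = await storage.start_multipart_upload(
    "backups/db-dump.tar.gz",
    content_type="application/gzip",
)
try:
    for part_number, chunk in enumerate(chunks, start=1):
        await storage.upload_part(upload, part_number, chunk)
    stored = await storage.complete_multipart_upload(upload)
except Exception:
    # Abort so already-uploaded parts don't linger and accrue storage costs
    await storage.abort_multipart_upload(upload)
    raise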
- async put_large(key, data, *, content_type=None, metadata=None, part_size=10485760, progress_callback=None)[source]¶
Upload a large file using multipart upload.
This is a convenience method that handles the multipart upload process automatically. It splits the data into parts, uploads them, and completes the upload.
- Parameters:
key (str) – Storage path/key for the file
data (bytes | AsyncIterator[bytes]) – File contents as bytes or async byte stream
metadata (dict[str, str] | None) – Additional metadata to store with the file
part_size (int) – Size of each part in bytes (default 10MB)
progress_callback (ProgressCallback | None) – Optional callback for progress updates
- Return type: StoredFile
- Returns:
StoredFile with metadata about the stored file
- Raises:
StorageError – If the upload fails
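put_large() wraps that whole sequence. A sketch that streams a local file as an async byte source (the aiofiles dependency, the helper generator, and the path and key are all assumptions for illustration):
import aiofiles  # third-party async file I/O, assumed to be installed

async def file_chunks(path: str, chunk_size: int = 8 * 1024 * 1024):
    # Yield the file as successive byte blocks
    async with aiofiles.open(path, "rb") as f:
        while chunk := await f.read(chunk_size):
            yield chunk

stored = await storage.put_large(
    "backups/db-dump.tar.gz",
    file_chunks("/var/backups/db-dump.tar.gz"),
    content_type="application/gzip",
    part_size=16 * 1024 * 1024,  # 16 MB parts instead of the 10 MB default
)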
Usage Examples¶
AWS S3¶
from litestar_storages import S3Storage, S3Config
# Using explicit credentials
storage = S3Storage(
config=S3Config(
bucket="my-uploads",
region="us-east-1",
access_key_id="AKIAIOSFODNN7EXAMPLE",
secret_access_key="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
)
)
# Using environment variables or IAM role (recommended for production)
storage = S3Storage(
config=S3Config(
bucket="my-uploads",
region="us-east-1",
)
)
Cloudflare R2¶
storage = S3Storage(
config=S3Config(
bucket="my-bucket",
endpoint_url="https://ACCOUNT_ID.r2.cloudflarestorage.com",
access_key_id="R2_ACCESS_KEY",
secret_access_key="R2_SECRET_KEY",
)
)
DigitalOcean Spaces¶
storage = S3Storage(
config=S3Config(
bucket="my-space",
endpoint_url="https://nyc3.digitaloceanspaces.com",
region="nyc3",
access_key_id="DO_SPACES_KEY",
secret_access_key="DO_SPACES_SECRET",
)
)
MinIO (Local Development)¶
storage = S3Storage(
config=S3Config(
bucket="test-bucket",
endpoint_url="http://localhost:9000",
access_key_id="minioadmin",
secret_access_key="minioadmin",
use_ssl=False,
)
)
With Key Prefix¶
storage = S3Storage(
config=S3Config(
bucket="my-bucket",
region="us-east-1",
prefix="uploads/2024/",
)
)
# Key "photo.jpg" becomes "uploads/2024/photo.jpg" in S3
await storage.put("photo.jpg", data)
Presigned URLs¶
from datetime import timedelta
# Use default expiry (1 hour)
url = await storage.url("documents/report.pdf")
# Custom expiry
url = await storage.url(
"documents/report.pdf",
expires_in=timedelta(hours=24),
)
# Configure default expiry
storage = S3Storage(
config=S3Config(
bucket="my-bucket",
presigned_expiry=timedelta(hours=6),
)
)
Server-Side Operations¶
Copy and move operations use S3’s native APIs for efficiency:
# Server-side copy (fast, no data transfer through client)
await storage.copy("source.txt", "destination.txt")
# Server-side move (copy + delete)
await storage.move("old-path.txt", "new-path.txt")
Credential Resolution Order¶
Credentials are resolved in this order:
1. Explicit configuration (access_key_id, secret_access_key)
2. Environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
3. Shared credentials file (~/.aws/credentials)
4. IAM role credentials (EC2, ECS, Lambda)
For production deployments on AWS, IAM roles are recommended over explicit credentials.