Azure Storage¶
Azure Blob Storage backend using azure-storage-blob async API. Supports
connection strings, account keys, and managed identity authentication.
Note
Requires the azure-storage-blob package. Install with:
pip install litestar-storages[azure] or pip install azure-storage-blob
For managed identity support, also install azure-identity.
Configuration¶
- class litestar_storages.backends.azure.AzureConfig[source]¶
Bases:
object

Configuration for Azure Blob Storage.

Supports authentication via:

- Connection string (connection_string)
- Account URL + credential (account_url + account_key or DefaultAzureCredential)
- SAS token (account_url with SAS token embedded)
- Variables:
container – Azure Blob container name
account_url – Azure storage account URL (e.g., https://<account>.blob.core.windows.net)
account_key – Storage account access key (optional if using connection string or DefaultAzureCredential)
connection_string – Full connection string (alternative to account_url + account_key)
prefix – Key prefix for all operations (e.g., “uploads/”)
presigned_expiry – Default expiration time for SAS URLs
Storage Class¶
- class litestar_storages.backends.azure.AzureStorage[source]¶
Bases:
BaseStorage

Azure Blob Storage backend.
Uses azure-storage-blob async API for all operations.
Example
>>> # Using connection string
>>> storage = AzureStorage(
...     config=AzureConfig(
...         container="my-container",
...         connection_string="DefaultEndpointsProtocol=https;...",
...     )
... )
>>> # Using account URL and key
>>> storage = AzureStorage(
...     config=AzureConfig(
...         container="my-container",
...         account_url="https://myaccount.blob.core.windows.net",
...         account_key="my-access-key",
...     )
... )
>>> # Using Azurite emulator
>>> storage = AzureStorage(
...     config=AzureConfig(
...         container="test-container",
...         connection_string="DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=...;BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1",
...     )
... )
Note
The client is lazily initialized on first use. When running on Azure (App Service, Functions, AKS, etc.), credentials can be automatically detected using DefaultAzureCredential.
- Parameters:
config (AzureConfig)
- __init__(config)[source]¶
Initialize AzureStorage.
- Parameters:
config (AzureConfig) – Configuration for the Azure Blob backend
- Raises:
ConfigurationError – If required configuration is missing
- async put(key, data, *, content_type=None, metadata=None)[source]¶
Store data at the given key.
- Parameters:
key (str) – Storage path/key for the file
data (bytes | AsyncIterator[bytes]) – File contents as bytes or async byte stream
content_type (str | None) – MIME type to associate with the file
metadata (dict[str, str] | None) – Additional metadata to store with the file
- Return type:
StoredFile
- Returns:
StoredFile with metadata about the stored file
- Raises:
StorageError – If the upload fails
- async get(key)[source]¶
Retrieve file contents as an async byte stream.
- Parameters:
key (str) – Storage path/key for the file
- Yields:
Chunks of file data as bytes
- Raises:
StorageFileNotFoundError – If the file does not exist
StorageError – If the retrieval fails
- Return type:
AsyncIterator[bytes]
- async get_bytes(key)[source]¶
Retrieve entire file contents as bytes.
- Parameters:
key (str) – Storage path/key for the file
- Return type:
bytes
- Returns:
Complete file contents as bytes
- Raises:
StorageFileNotFoundError – If the file does not exist
StorageError – If the retrieval fails
- async delete(key)[source]¶
Delete a file.
Note
Deleting a non-existent key will raise an error (unlike S3/GCS). Use exists() first if you need idempotent deletes.
- async list(prefix='', *, limit=None)[source]¶
List files with optional prefix filter.
- Parameters:
prefix (str) – Key prefix to filter results
limit (int | None) – Maximum number of files to yield
- Yields:
StoredFile metadata for each matching file
- Return type:
AsyncIterator[StoredFile]
- async url(key, *, expires_in=None)[source]¶
Generate a SAS URL for accessing the file.
- Parameters:
key (str) – Storage path/key for the file
expires_in (timedelta | None) – Expiration time for the SAS URL (defaults to presigned_expiry)
- Return type:
str
- Returns:
SAS URL string
Note
SAS URLs allow temporary access to private Azure blobs without requiring Azure credentials. Requires account key for signing.
- async copy(source, destination)[source]¶
Copy a file within the storage backend.
Uses Azure’s native copy operation for efficiency.
- Parameters:
source (str) – Key of the file to copy
destination (str) – Key for the new copy
- Return type:
StoredFile
- Returns:
StoredFile metadata for the new copy
- Raises:
StorageFileNotFoundError – If the source file does not exist
StorageError – If the copy fails
- async move(source, destination)[source]¶
Move/rename a file within the storage backend.
Uses Azure’s copy + delete operations (no native move).
- Parameters:
source (str) – Key of the file to move
destination (str) – New key for the file
- Return type:
StoredFile
- Returns:
StoredFile metadata for the moved file
- Raises:
StorageFileNotFoundError – If the source file does not exist
StorageError – If the move fails
- async info(key)[source]¶
Get metadata about a file without downloading it.
- Parameters:
key (str) – Storage path/key for the file
- Return type:
StoredFile
- Returns:
StoredFile with metadata
- Raises:
StorageFileNotFoundError – If the file does not exist
StorageError – If retrieving metadata fails
- async close()[source]¶
Close the Azure storage and release resources.
This method closes the underlying aiohttp session.
- Return type:
None
- async start_multipart_upload(key, *, content_type=None, metadata=None, part_size=4194304)[source]¶
Start a multipart upload using Azure Block Blobs.
Azure Block Blobs support uploading blocks (up to 50,000 blocks per blob) and then committing them as a single blob.
- Parameters:
key (str) – Storage path/key for the file
content_type (str | None) – MIME type to associate with the file
metadata (dict[str, str] | None) – Additional metadata to store with the file
part_size (int) – Size of each part in bytes (default 4MB)
- Return type:
MultipartUpload
- Returns:
MultipartUpload object to track the upload state
- Raises:
StorageError – If initiating the upload fails
Note
Azure doesn’t have an explicit “start multipart upload” API. Instead, we generate a unique upload_id and track it locally. The actual multipart upload happens when blocks are staged and committed.
- async upload_part(upload, part_number, data)[source]¶
Upload a single part (block) of a multipart upload.
Azure uses block IDs to identify blocks. Block IDs must be:

- Base64-encoded strings
- Unique within the blob
- Same length for all blocks in the blob
- Parameters:
upload (MultipartUpload) – The MultipartUpload object returned by start_multipart_upload()
part_number (int) – Sequential number of this part
data (bytes) – Contents of the part
- Return type:
str
- Returns:
Block ID (base64-encoded) of the uploaded block
- Raises:
StorageError – If the part upload fails
- async complete_multipart_upload(upload)[source]¶
Complete a multipart upload by committing all blocks.
- Parameters:
upload (MultipartUpload) – The MultipartUpload object with all parts uploaded
- Return type:
StoredFile
- Returns:
StoredFile metadata for the completed upload
- Raises:
StorageError – If completing the upload fails
- async abort_multipart_upload(upload)[source]¶
Abort a multipart upload.
Note
Azure automatically garbage-collects uncommitted blocks after 7 days. There is no explicit “abort” operation; simply don’t commit the block list. This method is provided for API consistency but is essentially a no-op.
- Parameters:
upload (MultipartUpload) – The MultipartUpload object to abort
- Return type:
None
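Example (a minimal sketch of the manual block flow using the methods above; the key and the `parts` chunk source are illustrative):

# `parts` stands in for your own chunk source (hypothetical data).
parts = [b"a" * 4 * 1024 * 1024, b"b" * 1024]

upload = await storage.start_multipart_upload(
    "videos/demo.mp4",
    content_type="video/mp4",
)
try:
    for part_number, chunk in enumerate(parts, start=1):
        await storage.upload_part(upload, part_number, chunk)
    result = await storage.complete_multipart_upload(upload)
except Exception:
    # Per the note above, abort is effectively a no-op: Azure
    # garbage-collects uncommitted blocks after 7 days.
    await storage.abort_multipart_upload(upload)
    raise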
- async put_large(key, data, *, content_type=None, metadata=None, part_size=4194304, progress_callback=None)[source]¶
Upload a large file using multipart upload.
This is a convenience method that handles the multipart upload process automatically. It splits the data into blocks, uploads them, and commits the block list.
- Parameters:
key (str) – Storage path/key for the file
data (bytes | AsyncIterator[bytes]) – File contents as bytes or async byte stream
content_type (str | None) – MIME type to associate with the file
metadata (dict[str, str] | None) – Additional metadata to store with the file
part_size (int) – Size of each part in bytes (default 4MB, max 4000MB per block)
progress_callback (ProgressCallback | None) – Optional callback for progress updates
- Return type:
StoredFile
- Returns:
StoredFile with metadata about the stored file
- Raises:
StorageError – If the upload fails
Note
Azure Block Blobs support up to 50,000 blocks per blob. Each block can be up to 4000MB in size.
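Example (a sketch; stream_from_disk is a hypothetical helper, and the key and part size are illustrative):

from collections.abc import AsyncIterator

async def stream_from_disk(path: str, chunk_size: int = 4 * 1024 * 1024) -> AsyncIterator[bytes]:
    # Hypothetical helper: read a local file in fixed-size chunks.
    # Synchronous reads are shown for brevity.
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            yield chunk

result = await storage.put_large(
    "backups/dump.bin",
    stream_from_disk("dump.bin"),
    content_type="application/octet-stream",
    part_size=8 * 1024 * 1024,  # 8 MB blocks
)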
Usage Examples¶
Using Connection String¶
The simplest authentication method:
from litestar_storages import AzureStorage, AzureConfig
storage = AzureStorage(
config=AzureConfig(
container="my-container",
connection_string="DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=...;EndpointSuffix=core.windows.net",
)
)
Using Account URL and Key¶
storage = AzureStorage(
config=AzureConfig(
container="my-container",
account_url="https://myaccount.blob.core.windows.net",
account_key="your-account-key",
)
)
Using Managed Identity¶
When running on Azure (App Service, Functions, AKS, etc.):
# Requires: pip install azure-identity
storage = AzureStorage(
config=AzureConfig(
container="my-container",
account_url="https://myaccount.blob.core.windows.net",
# No account_key - uses DefaultAzureCredential
)
)
Using Azurite Emulator¶
For local development:
# Start Azurite
docker run -p 10000:10000 -p 10001:10001 -p 10002:10002 \
mcr.microsoft.com/azure-storage/azurite
storage = AzureStorage(
config=AzureConfig(
container="test-container",
connection_string="DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1",
)
)
With Key Prefix¶
storage = AzureStorage(
config=AzureConfig(
container="uploads",
connection_string="...",
prefix="media/images/",
)
)
# Key "photo.jpg" becomes "media/images/photo.jpg" in Azure
await storage.put("photo.jpg", data)
SAS URLs¶
Generate shared access signature URLs for temporary access:
from datetime import timedelta
# Default expiry (1 hour)
url = await storage.url("documents/report.pdf")
# Custom expiry
url = await storage.url(
"documents/report.pdf",
expires_in=timedelta(hours=24),
)
Note
Generating SAS URLs requires the account key (either directly via
account_key or parsed from connection_string).
File Operations¶
# Upload with metadata
result = await storage.put(
"documents/contract.pdf",
pdf_bytes,
content_type="application/pdf",
metadata={"client": "acme", "version": "2"},
)
# Download
data = await storage.get_bytes("documents/contract.pdf")
# Stream download
async for chunk in storage.get("documents/contract.pdf"):
await response.write(chunk)
# Check existence
exists = await storage.exists("documents/contract.pdf")
# Get metadata
info = await storage.info("documents/contract.pdf")
print(f"Size: {info.size}, ETag: {info.etag}")
print(f"Metadata: {info.metadata}")
# List files
async for file in storage.list("documents/"):
print(f"{file.key}: {file.size} bytes")
# Copy (server-side)
await storage.copy("source.txt", "destination.txt")
# Move (copy + delete)
await storage.move("old-path.txt", "new-path.txt")
# Delete
await storage.delete("documents/old.pdf")
Note
Unlike S3 and GCS, Azure delete raises an error for non-existent blobs.
Use exists() first if you need idempotent deletes.
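For example (a simple guard; note that the check-then-delete pair is not atomic under concurrent access):

if await storage.exists("documents/old.pdf"):
    await storage.delete("documents/old.pdf")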
Authentication Methods¶
Azure supports multiple authentication methods:
Connection String - Contains account name and key
Account URL + Key - Explicit credentials
Account URL + Managed Identity - Using DefaultAzureCredential
SAS Token - Embedded in account URL
For production on Azure, managed identity is recommended.
Resource Cleanup¶
Always close the storage when done:
storage = AzureStorage(config=AzureConfig(...))
try:
# Use storage...
pass
finally:
await storage.close()
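One reusable variant of this pattern (a sketch; open_storage is a hypothetical helper built on the close() method documented above):

from contextlib import asynccontextmanager

@asynccontextmanager
async def open_storage(config: AzureConfig):
    # Hypothetical helper: guarantees close() runs even on errors.
    storage = AzureStorage(config=config)
    try:
        yield storage
    finally:
        await storage.close()

async with open_storage(AzureConfig(...)) as storage:
    ...  # use storage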
When using StoragePlugin,
cleanup is handled automatically on application shutdown.