Azure Storage

Azure Blob Storage backend using the azure-storage-blob async API. Supports connection strings, account keys, and managed identity authentication.

Note

Requires the azure-storage-blob package. Install with: pip install litestar-storages[azure] or pip install azure-storage-blob

For managed identity support, also install azure-identity.

Configuration

class litestar_storages.backends.azure.AzureConfig[source]

Bases: object

Configuration for Azure Blob Storage.

Supports authentication via:
  • Connection string (connection_string)

  • Account URL + credential (account_url + account_key or DefaultAzureCredential)

  • SAS token (account_url with SAS token embedded)

Variables:
  • container – Azure Blob container name

  • account_url – Azure storage account URL (e.g., https://<account>.blob.core.windows.net)

  • account_key – Storage account access key (optional if using connection string or DefaultAzureCredential)

  • connection_string – Full connection string (alternative to account_url + account_key)

  • prefix – Key prefix for all operations (e.g., “uploads/”)

  • presigned_expiry – Default expiration time for SAS URLs

Parameters:
container: str
account_url: str | None = None
account_key: str | None = None
connection_string: str | None = None
prefix: str = ''
presigned_expiry: timedelta
__init__(container, account_url=None, account_key=None, connection_string=None, prefix='', presigned_expiry=<factory>)

Storage Class

class litestar_storages.backends.azure.AzureStorage[source]

Bases: BaseStorage

Azure Blob Storage backend.

Uses the azure-storage-blob async API for all operations.

Example

>>> # Using connection string
>>> storage = AzureStorage(
...     config=AzureConfig(
...         container="my-container",
...         connection_string="DefaultEndpointsProtocol=https;...",
...     )
... )
>>> # Using account URL and key
>>> storage = AzureStorage(
...     config=AzureConfig(
...         container="my-container",
...         account_url="https://myaccount.blob.core.windows.net",
...         account_key="my-access-key",
...     )
... )
>>> # Using Azurite emulator
>>> storage = AzureStorage(
...     config=AzureConfig(
...         container="test-container",
...         connection_string="DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=...;BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1",
...     )
... )

Note

The client is lazily initialized on first use. When running on Azure (App Service, Functions, AKS, etc.), credentials can be automatically detected using DefaultAzureCredential.

Parameters:

config (AzureConfig)

__init__(config)[source]

Initialize AzureStorage.

Parameters:

config (AzureConfig) – Configuration for the Azure Blob backend

Raises:

ConfigurationError – If required configuration is missing

async put(key, data, *, content_type=None, metadata=None)[source]

Store data at the given key.

Parameters:
  • key (str) – Storage path/key for the file

  • data (bytes | AsyncIterator[bytes]) – File contents as bytes or async byte stream

  • content_type (str | None) – MIME type of the content

  • metadata (dict[str, str] | None) – Additional metadata to store with the file

Return type:

StoredFile

Returns:

StoredFile with metadata about the stored file

Raises:

StorageError – If the upload fails
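
Because data accepts an async iterator, large payloads can be streamed without buffering them in memory. A minimal sketch; the chunk source here is hypothetical:

async def read_chunks():
    # Hypothetical data source; any async iterator of bytes works
    # (a file reader, an HTTP response body, another backend's get(), ...)
    yield b"first chunk"
    yield b"second chunk"

result = await storage.put(
    "uploads/streamed.bin",
    read_chunks(),
    content_type="application/octet-stream",
)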

async get(key)[source]

Retrieve file contents as an async byte stream.

Parameters:

key (str) – Storage path/key for the file

Yields:

Chunks of file data as bytes

Return type:

AsyncIterator[bytes]

async get_bytes(key)[source]

Retrieve entire file contents as bytes.

Parameters:

key (str) – Storage path/key for the file

Return type:

bytes

Returns:

Complete file contents as bytes

async delete(key)[source]

Delete a file.

Parameters:

key (str) – Storage path/key for the file

Return type:

None

Note

Deleting a non-existent key will raise an error (unlike S3/GCS). Use exists() first if you need idempotent deletes.

async exists(key)[source]

Check if a file exists.

Parameters:

key (str) – Storage path/key for the file

Return type:

bool

Returns:

True if the file exists, False otherwise

async list(prefix='', *, limit=None)[source]

List files with optional prefix filter.

Parameters:
  • prefix (str) – Filter results to keys starting with this prefix

  • limit (int | None) – Maximum number of results to return

Yields:

StoredFile metadata for each matching file

Return type:

AsyncGenerator[StoredFile, None]
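
For example, limit caps how many entries are fetched; a short sketch:

# Collect at most 10 entries under "documents/"
recent = [f async for f in storage.list("documents/", limit=10)]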

async url(key, *, expires_in=None)[source]

Generate a SAS URL for accessing the file.

Parameters:
  • key (str) – Storage path/key for the file

  • expires_in (timedelta | None) – Optional expiration time (defaults to config.presigned_expiry)

Return type:

str

Returns:

SAS URL string

Note

SAS URLs allow temporary access to private Azure blobs without requiring Azure credentials. Generating them requires the account key for signing.

async copy(source, destination)[source]

Copy a file within the storage backend.

Uses Azure’s native copy operation for efficiency.

Parameters:
  • source (str) – Source key to copy from

  • destination (str) – Destination key to copy to

Return type:

StoredFile

Returns:

StoredFile metadata for the new copy

async move(source, destination)[source]

Move/rename a file within the storage backend.

Uses Azure’s copy + delete operations (no native move).

Parameters:
  • source (str) – Source key to move from

  • destination (str) – Destination key to move to

Return type:

StoredFile

Returns:

StoredFile metadata for the moved file

async info(key)[source]

Get metadata about a file without downloading it.

Parameters:

key (str) – Storage path/key for the file

Return type:

StoredFile

Returns:

StoredFile with metadata

async close()[source]

Close the Azure storage and release resources.

This method closes the underlying aiohttp session.

Return type:

None

async start_multipart_upload(key, *, content_type=None, metadata=None, part_size=4194304)[source]

Start a multipart upload using Azure Block Blobs.

Azure Block Blobs support uploading blocks (up to 50,000 blocks per blob) and then committing them as a single blob.

Parameters:
  • key (str) – Storage path/key for the file

  • content_type (str | None) – MIME type of the content

  • metadata (dict[str, str] | None) – Additional metadata to store with the file

  • part_size (int) – Size of each part in bytes (default 4MB, max 4000MB per block)

Return type:

MultipartUpload

Returns:

MultipartUpload object to track the upload state

Raises:

StorageError – If initiating the upload fails

Note

Azure doesn’t have an explicit “start multipart upload” API. Instead, we generate a unique upload_id and track it locally. The actual multipart upload happens when blocks are staged and committed.

async upload_part(upload, part_number, data)[source]

Upload a single part (block) of a multipart upload.

Azure uses block IDs to identify blocks. Block IDs must be:
  • Base64-encoded strings

  • Unique within the blob

  • The same length for all blocks in the blob

Parameters:
  • upload (MultipartUpload) – The MultipartUpload object from start_multipart_upload

  • part_number (int) – Part number (1-indexed)

  • data (bytes) – The part data to upload

Return type:

str

Returns:

Block ID (base64-encoded) of the uploaded block

Raises:

StorageError – If the part upload fails

async complete_multipart_upload(upload)[source]

Complete a multipart upload by committing all blocks.

Parameters:

upload (MultipartUpload) – The MultipartUpload object with all parts uploaded

Return type:

StoredFile

Returns:

StoredFile metadata for the completed upload

Raises:

StorageError – If completing the upload fails

async abort_multipart_upload(upload)[source]

Abort a multipart upload.

Note

Azure automatically garbage-collects uncommitted blocks after 7 days. There is no explicit “abort” operation; simply don’t commit the block list. This method is provided for API consistency but is essentially a no-op.

Parameters:

upload (MultipartUpload) – The MultipartUpload object to abort

Return type:

None
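
Putting the block API together, a manual multipart upload might look like the sketch below. It assumes, per the descriptions above, that the MultipartUpload object tracks staged blocks itself; keys and data are illustrative:

parts = [b"\x00" * 4 * 1024 * 1024, b"\x00" * 1024]  # illustrative part data

upload = await storage.start_multipart_upload(
    "videos/raw.mp4",
    content_type="video/mp4",
)
try:
    for number, part in enumerate(parts, start=1):
        await storage.upload_part(upload, number, part)
    stored = await storage.complete_multipart_upload(upload)
except Exception:
    # Effectively a no-op on Azure (see abort_multipart_upload above),
    # but keeps the flow portable across backends
    await storage.abort_multipart_upload(upload)
    raise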

async put_large(key, data, *, content_type=None, metadata=None, part_size=4194304, progress_callback=None)[source]

Upload a large file using multipart upload.

This is a convenience method that handles the multipart upload process automatically. It splits the data into blocks, uploads them, and commits the block list.

Parameters:
  • key (str) – Storage path/key for the file

  • data (bytes | AsyncIterator[bytes]) – File contents as bytes or async byte stream

  • content_type (str | None) – MIME type of the content

  • metadata (dict[str, str] | None) – Additional metadata to store with the file

  • part_size (int) – Size of each part in bytes (default 4MB, max 4000MB per block)

  • progress_callback (ProgressCallback | None) – Optional callback for progress updates

Return type:

StoredFile

Returns:

StoredFile with metadata about the stored file

Raises:

StorageError – If the upload fails

Note

Azure Block Blobs support up to 50,000 blocks per blob. Each block can be up to 4000MB in size.
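
In practice put_large is simpler than driving the block API by hand. A sketch, streaming from a local file; the helper below is illustrative and uses blocking file I/O for brevity:

async def stream_file(path):
    # Illustrative helper: blocking reads, chunked into 4MB pieces
    with open(path, "rb") as f:
        while chunk := f.read(4 * 1024 * 1024):
            yield chunk

stored = await storage.put_large(
    "backups/db-dump.bin",
    stream_file("/tmp/db-dump.bin"),
    content_type="application/octet-stream",
    part_size=8 * 1024 * 1024,  # 8MB blocks instead of the 4MB default
)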

Usage Examples

Using Connection String

The simplest authentication method:

from litestar_storages import AzureStorage, AzureConfig

storage = AzureStorage(
    config=AzureConfig(
        container="my-container",
        connection_string="DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=...;EndpointSuffix=core.windows.net",
    )
)

Using Account URL and Key

storage = AzureStorage(
    config=AzureConfig(
        container="my-container",
        account_url="https://myaccount.blob.core.windows.net",
        account_key="your-account-key",
    )
)

Using Managed Identity

When running on Azure (App Service, Functions, AKS, etc.):

# Requires: pip install azure-identity

storage = AzureStorage(
    config=AzureConfig(
        container="my-container",
        account_url="https://myaccount.blob.core.windows.net",
        # No account_key - uses DefaultAzureCredential
    )
)

Using Azurite Emulator

For local development:

# Start Azurite
docker run -p 10000:10000 -p 10001:10001 -p 10002:10002 \
    mcr.microsoft.com/azure-storage/azurite

storage = AzureStorage(
    config=AzureConfig(
        container="test-container",
        connection_string="DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1",
    )
)

With Key Prefix

storage = AzureStorage(
    config=AzureConfig(
        container="uploads",
        connection_string="...",
        prefix="media/images/",
    )
)

# Key "photo.jpg" becomes "media/images/photo.jpg" in Azure
await storage.put("photo.jpg", data)

SAS URLs

Generate shared access signature URLs for temporary access:

from datetime import timedelta

# Default expiry (1 hour)
url = await storage.url("documents/report.pdf")

# Custom expiry
url = await storage.url(
    "documents/report.pdf",
    expires_in=timedelta(hours=24),
)

Note

Generating SAS URLs requires the account key (either directly via account_key or parsed from connection_string).

File Operations

# Upload with metadata
result = await storage.put(
    "documents/contract.pdf",
    pdf_bytes,
    content_type="application/pdf",
    metadata={"client": "acme", "version": "2"},
)

# Download
data = await storage.get_bytes("documents/contract.pdf")

# Stream download
async for chunk in storage.get("documents/contract.pdf"):
    await response.write(chunk)

# Check existence
exists = await storage.exists("documents/contract.pdf")

# Get metadata
info = await storage.info("documents/contract.pdf")
print(f"Size: {info.size}, ETag: {info.etag}")
print(f"Metadata: {info.metadata}")

# List files
async for file in storage.list("documents/"):
    print(f"{file.key}: {file.size} bytes")

# Copy (server-side)
await storage.copy("source.txt", "destination.txt")

# Move (copy + delete)
await storage.move("old-path.txt", "new-path.txt")

# Delete
await storage.delete("documents/old.pdf")

Note

Unlike S3 and GCS, Azure delete raises an error for non-existent blobs. Use exists() first if you need idempotent deletes.
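
A guard for idempotent deletes, as the note suggests:

# Tolerate missing blobs instead of raising
if await storage.exists("documents/old.pdf"):
    await storage.delete("documents/old.pdf")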

Authentication Methods

Azure supports multiple authentication methods:

  1. Connection String - Contains account name and key

  2. Account URL + Key - Explicit credentials

  3. Account URL + Managed Identity - Using DefaultAzureCredential

  4. SAS Token - Embedded in account URL (see the sketch below)

For production on Azure, managed identity is recommended.
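
Method 4 is not shown above. Based on the AzureConfig description (“account_url with SAS token embedded”), it would look roughly like this; the token values are placeholders:

storage = AzureStorage(
    config=AzureConfig(
        container="my-container",
        # SAS token embedded as the query string (placeholder values)
        account_url="https://myaccount.blob.core.windows.net?sv=...&sig=...",
    )
)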

Resource Cleanup

Always close the storage when done:

storage = AzureStorage(config=AzureConfig(...))
try:
    # Use storage...
    pass
finally:
    await storage.close()

When using StoragePlugin, cleanup is handled automatically on application shutdown.