GCS Storage

Google Cloud Storage backend using gcloud-aio-storage for async operations. Supports Application Default Credentials (ADC) and service account authentication.

Note

Requires the gcloud-aio-storage package. Install with: pip install litestar-storages[gcs] or pip install gcloud-aio-storage

Configuration

class litestar_storages.backends.gcs.GCSConfig[source]

Bases: object

Configuration for Google Cloud Storage.

Supports authentication via:

  • Service account JSON file (service_file)

  • Application Default Credentials (ADC) – automatic when running on GCP

  • Explicit token (for testing/special cases)

Variables:
  • bucket – GCS bucket name

  • project – GCP project ID (required for some operations)

  • service_file – Path to service account JSON file

  • prefix – Key prefix for all operations (e.g., “uploads/”)

  • presigned_expiry – Default expiration time for signed URLs

  • api_root – Custom API endpoint (for emulators)

Parameters:
bucket: str
project: str | None = None
service_file: str | None = None
prefix: str = ''
presigned_expiry: timedelta
api_root: str | None = None
__init__(bucket, project=None, service_file=None, prefix='', presigned_expiry=<factory>, api_root=None)
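
For example, a config that keeps every key under a common prefix and shortens the default signed-URL lifetime might look like this (bucket name and values are illustrative):

from datetime import timedelta

from litestar_storages import GCSConfig

config = GCSConfig(
    bucket="my-bucket",
    project="my-project",
    prefix="uploads/",
    presigned_expiry=timedelta(minutes=15),
)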

Storage Class

class litestar_storages.backends.gcs.GCSStorage[source]

Bases: BaseStorage

Google Cloud Storage backend.

Uses gcloud-aio-storage for async GCS operations.

Example

>>> # Using Application Default Credentials
>>> storage = GCSStorage(
...     config=GCSConfig(
...         bucket="my-bucket",
...         project="my-project",
...     )
... )
>>> # Using service account
>>> storage = GCSStorage(
...     config=GCSConfig(
...         bucket="my-bucket",
...         service_file="/path/to/service-account.json",
...     )
... )
>>> # Using emulator (fake-gcs-server)
>>> storage = GCSStorage(
...     config=GCSConfig(
...         bucket="test-bucket",
...         api_root="http://localhost:4443",
...     )
... )

Note

The client is lazily initialized on first use. When running on GCP (GCE, GKE, Cloud Run, etc.), credentials are automatically detected.

Parameters:

config (GCSConfig)

__init__(config)[source]

Initialize GCSStorage.

Parameters:

config (GCSConfig) – Configuration for the GCS backend

Raises:

ConfigurationError – If required configuration is missing

async put(key, data, *, content_type=None, metadata=None)[source]

Store data at the given key.

Parameters:
  • key (str) – Storage path/key for the file

  • data (bytes | AsyncIterator[bytes]) – File contents as bytes or async byte stream

  • content_type (str | None) – MIME type of the content

  • metadata (dict[str, str] | None) – Additional metadata to store with the file

Return type:

StoredFile

Returns:

StoredFile with metadata about the stored file

Raises:

StorageError – If the upload fails
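
A short sketch of both accepted data forms, assuming an existing storage instance (keys and contents are illustrative):

# Upload raw bytes
stored = await storage.put(
    "uploads/report.csv",
    b"id,total\n1,9.99\n",
    content_type="text/csv",
    metadata={"source": "billing"},
)

# Upload from an async byte stream
async def chunks():
    yield b"first chunk "
    yield b"second chunk"

stored = await storage.put("uploads/streamed.txt", chunks(), content_type="text/plain")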

async get(key)[source]

Retrieve file contents as an async byte stream.

Parameters:

key (str) – Storage path/key for the file

Yields:

Chunks of file data as bytes

Return type:

AsyncIterator[bytes]

async get_bytes(key)[source]

Retrieve entire file contents as bytes.

Parameters:

key (str) – Storage path/key for the file

Return type:

bytes

Returns:

Complete file contents as bytes

async delete(key)[source]

Delete a file.

Parameters:

key (str) – Storage path/key for the file

Return type:

None

Note

GCS delete is idempotent - deleting a non-existent key succeeds silently.
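
Because deletes are idempotent, cleanup code does not need an exists() check first (key is illustrative):

# Succeeds whether or not the key is still present
await storage.delete("uploads/tmp-123.bin")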

async exists(key)[source]

Check if a file exists.

Parameters:

key (str) – Storage path/key for the file

Return type:

bool

Returns:

True if the file exists, False otherwise

async list(prefix='', *, limit=None)[source]

List files with optional prefix filter.

Parameters:
  • prefix (str) – Filter results to keys starting with this prefix

  • limit (int | None) – Maximum number of results to return

Yields:

StoredFile metadata for each matching file

Return type:

AsyncGenerator[StoredFile, None]
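
For example, to iterate over at most 100 objects under a prefix (values are illustrative):

async for file in storage.list("reports/2024/", limit=100):
    print(f"{file.key}: {file.size} bytes")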

async url(key, *, expires_in=None)[source]

Generate a signed URL for accessing the file.

Parameters:
  • key (str) – Storage path/key for the file

  • expires_in (timedelta | None) – Optional expiration time (defaults to config.presigned_expiry)

Return type:

str

Returns:

Signed URL string

Note

Signed URLs grant temporary access to private GCS objects without requiring the caller to hold GCP credentials. Generating them requires service account credentials.

async copy(source, destination)[source]

Copy a file within the storage backend.

Uses GCS’s native copy operation for efficiency.

Parameters:
  • source (str) – Source key to copy from

  • destination (str) – Destination key to copy to

Return type:

StoredFile

Returns:

StoredFile metadata for the new copy

async move(source, destination)[source]

Move/rename a file within the storage backend.

Uses GCS’s copy + delete operations (no native move).

Parameters:
  • source (str) – Source key to move from

  • destination (str) – Destination key to move to

Return type:

StoredFile

Returns:

StoredFile metadata for the moved file

async info(key)[source]

Get metadata about a file without downloading it.

Parameters:

key (str) – Storage path/key for the file

Return type:

StoredFile

Returns:

StoredFile with metadata

async start_multipart_upload(key, *, content_type=None, metadata=None, part_size=5242880)[source]

Start a multipart upload.

Use this for large files (typically > 100MB) to enable:

  • Chunked uploads with progress tracking

  • Better handling of network failures

  • Memory-efficient streaming uploads

Note

GCS doesn’t have native multipart upload like S3. This implementation buffers parts in memory and uploads them when complete_multipart_upload is called. For true resumable uploads, consider using GCS’s resumable upload API directly.

Parameters:
  • key (str) – Storage path/key for the file

  • content_type (str | None) – MIME type of the content

  • metadata (dict[str, str] | None) – Additional metadata to store with the file

  • part_size (int) – Size of each part in bytes (default 5MB)

Return type:

MultipartUpload

Returns:

MultipartUpload object to track the upload state

Raises:

StorageError – If initiating the upload fails

async upload_part(upload, part_number, data)[source]

Upload a single part of a multipart upload.

Note

Parts are buffered in memory until complete_multipart_upload is called.

Parameters:
  • upload (MultipartUpload) – The MultipartUpload object from start_multipart_upload

  • part_number (int) – Part number (1-indexed, must be sequential)

  • data (bytes) – The part data to upload

Return type:

str

Returns:

ETag (placeholder) of the uploaded part

Raises:

StorageError – If the part upload fails

async complete_multipart_upload(upload)[source]

Complete a multipart upload.

Combines all buffered parts and uploads the complete file to GCS.

Parameters:

upload (MultipartUpload) – The MultipartUpload object with all parts uploaded

Return type:

StoredFile

Returns:

StoredFile metadata for the completed upload

Raises:

StorageError – If completing the upload fails

async abort_multipart_upload(upload)[source]

Abort a multipart upload.

This cancels an in-progress multipart upload and frees buffered data. Use this to clean up failed uploads.

Parameters:

upload (MultipartUpload) – The MultipartUpload object to abort

Raises:

StorageError – If aborting the upload fails

Return type:

None
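
Putting the four multipart methods together, a manual upload might follow this sketch; read_parts() is a hypothetical source of roughly part-sized chunks, and any failure aborts the upload so buffered parts are released:

upload = await storage.start_multipart_upload(
    "exports/archive.bin",
    content_type="application/octet-stream",
    part_size=5 * 1024 * 1024,
)
try:
    part_number = 1
    async for chunk in read_parts():  # hypothetical async generator of ~5MB chunks
        await storage.upload_part(upload, part_number, chunk)
        part_number += 1
    stored = await storage.complete_multipart_upload(upload)
except Exception:
    await storage.abort_multipart_upload(upload)
    raise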

async put_large(key, data, *, content_type=None, metadata=None, part_size=10485760, progress_callback=None)[source]

Upload a large file using multipart upload.

This is a convenience method that handles the multipart upload process automatically. It splits the data into parts, uploads them, and completes the upload.

Parameters:
  • key (str) – Storage path/key for the file

  • data (bytes | AsyncIterator[bytes]) – File contents as bytes or async byte stream

  • content_type (str | None) – MIME type of the content

  • metadata (dict[str, str] | None) – Additional metadata to store with the file

  • part_size (int) – Size of each part in bytes (default 10MB)

  • progress_callback (ProgressCallback | None) – Optional callback for progress updates

Return type:

StoredFile

Returns:

StoredFile with metadata about the stored file

Raises:

StorageError – If the upload fails
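
In practice, put_large is usually preferred over driving the multipart methods by hand. A sketch, assuming the data arrives as an async byte stream (the generator below is hypothetical):

async def export_rows():  # hypothetical producer of a large text export
    for batch in range(1_000):
        yield (f"row-{batch}\n" * 10_000).encode()

stored = await storage.put_large(
    "exports/full-dump.txt",
    export_rows(),
    content_type="text/plain",
    part_size=10 * 1024 * 1024,
)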

async close()[source]

Close the GCS storage and release resources.

This method closes the underlying aiohttp session.

Return type:

None

Usage Examples

Using Application Default Credentials

When running on GCP (GCE, GKE, Cloud Run, Cloud Functions), credentials are automatically detected:

from litestar_storages import GCSStorage, GCSConfig

storage = GCSStorage(
    config=GCSConfig(
        bucket="my-bucket",
        project="my-project",
    )
)

Using Service Account

For local development or non-GCP environments:

storage = GCSStorage(
    config=GCSConfig(
        bucket="my-bucket",
        service_file="/path/to/service-account.json",
    )
)

Using Emulator (fake-gcs-server)

For local testing without GCP access:

# Start emulator: docker run -p 4443:4443 fsouza/fake-gcs-server

storage = GCSStorage(
    config=GCSConfig(
        bucket="test-bucket",
        api_root="http://localhost:4443",
    )
)

With Key Prefix

storage = GCSStorage(
    config=GCSConfig(
        bucket="my-bucket",
        prefix="uploads/media/",
    )
)

# Key "image.jpg" becomes "uploads/media/image.jpg" in GCS
await storage.put("image.jpg", data)

Signed URLs

from datetime import timedelta

# Generate signed URL with default expiry (1 hour)
url = await storage.url("documents/contract.pdf")

# Custom expiry
url = await storage.url(
    "documents/contract.pdf",
    expires_in=timedelta(days=7),
)

Note

Generating signed URLs requires service account credentials with the iam.serviceAccounts.signBlob permission.

File Operations

# Upload
result = await storage.put(
    "reports/monthly.pdf",
    pdf_bytes,
    content_type="application/pdf",
    metadata={"department": "finance"},
)

# Download
data = await storage.get_bytes("reports/monthly.pdf")

# Stream download
async for chunk in storage.get("reports/monthly.pdf"):
    process(chunk)

# Check existence
if await storage.exists("reports/monthly.pdf"):
    print("File exists")

# Get metadata
info = await storage.info("reports/monthly.pdf")
print(f"Size: {info.size}, Type: {info.content_type}")

# List files
async for file in storage.list("reports/"):
    print(f"{file.key}: {file.size} bytes")

# Copy and move
await storage.copy("reports/v1.pdf", "reports/v2.pdf")
await storage.move("reports/draft.pdf", "reports/final.pdf")

# Delete
await storage.delete("reports/old.pdf")

Authentication Methods

GCS supports multiple authentication methods:

  1. Application Default Credentials (ADC) - Automatic on GCP

  2. Service Account JSON - Via service_file config

  3. Environment Variable - Set GOOGLE_APPLICATION_CREDENTIALS

For production on GCP, ADC with Workload Identity is recommended.
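
The environment-variable route is ADC pointed at a file: set GOOGLE_APPLICATION_CREDENTIALS before the process starts (or, as a sketch, in-process before the client is first used) and omit service_file (path is illustrative):

import os

from litestar_storages import GCSStorage, GCSConfig

# Picked up by Application Default Credentials; equivalent to passing service_file
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/path/to/service-account.json"

storage = GCSStorage(config=GCSConfig(bucket="my-bucket"))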

Resource Cleanup

Always close the storage when done to release HTTP sessions:

storage = GCSStorage(config=GCSConfig(bucket="my-bucket"))
try:
    # Use storage...
    pass
finally:
    await storage.close()

When using StoragePlugin, cleanup is handled automatically on application shutdown.