GCS Storage¶
Google Cloud Storage backend using gcloud-aio-storage for async operations.
Supports Application Default Credentials (ADC) and service account authentication.
Note
Requires the gcloud-aio-storage package. Install with:
pip install litestar-storages[gcs] or pip install gcloud-aio-storage
Configuration¶
- class litestar_storages.backends.gcs.GCSConfig[source]¶
Bases: object
Configuration for Google Cloud Storage.
Supports authentication via:
- Service account JSON file (service_file)
- Application Default Credentials (ADC) - automatic when running on GCP
- Explicit token (for testing/special cases)
- Variables:
bucket – GCS bucket name
project – GCP project ID (required for some operations)
service_file – Path to service account JSON file
prefix – Key prefix for all operations (e.g., “uploads/”)
presigned_expiry – Default expiration time for signed URLs
api_root – Custom API endpoint (for emulators)
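For illustration only, a configuration combining the key prefix and signed-URL options might look like the sketch below; presigned_expiry is assumed to take a timedelta, matching the expires_in parameter used for signed URLs further down this page.
from datetime import timedelta

from litestar_storages import GCSConfig

config = GCSConfig(
    bucket="my-bucket",
    project="my-project",
    prefix="uploads/",                       # every key is stored under "uploads/"
    presigned_expiry=timedelta(minutes=30),  # assumed timedelta; default lifetime of signed URLs
)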
Storage Class¶
- class litestar_storages.backends.gcs.GCSStorage[source]¶
Bases: BaseStorage
Google Cloud Storage backend.
Uses gcloud-aio-storage for async GCS operations.
Example
>>> # Using Application Default Credentials
>>> storage = GCSStorage(
...     config=GCSConfig(
...         bucket="my-bucket",
...         project="my-project",
...     )
... )
>>> # Using service account
>>> storage = GCSStorage(
...     config=GCSConfig(
...         bucket="my-bucket",
...         service_file="/path/to/service-account.json",
...     )
... )
>>> # Using emulator (fake-gcs-server)
>>> storage = GCSStorage(
...     config=GCSConfig(
...         bucket="test-bucket",
...         api_root="http://localhost:4443",
...     )
... )
Note
The client is lazily initialized on first use. When running on GCP (GCE, GKE, Cloud Run, etc.), credentials are automatically detected.
- Parameters:
  config (GCSConfig)
- __init__(config)[source]¶
Initialize GCSStorage.
- Parameters:
  config (GCSConfig) – Configuration for the GCS backend
- Raises:
ConfigurationError – If required configuration is missing
- async put(key, data, *, content_type=None, metadata=None)[source]¶
Store data at the given key.
- Parameters:
  key (str) – Storage path/key for the file
  data (bytes | AsyncIterator[bytes]) – File contents as bytes or async byte stream
  content_type (str | None) – MIME type of the file
  metadata (dict[str, str] | None) – Additional metadata to store with the file
- Return type:
  StoredFile
- Returns:
StoredFile with metadata about the stored file
- Raises:
StorageError – If the upload fails
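A short sketch of a put call that also uses the returned StoredFile; png_bytes is placeholder data, and key and size are the StoredFile fields shown in the usage examples below.
stored = await storage.put(
    "avatars/user-1.png",
    png_bytes,                      # placeholder for the raw file contents
    content_type="image/png",
    metadata={"owner": "user-1"},
)
print(stored.key, stored.size)      # e.g. "avatars/user-1.png" 20481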
- async get(key)[source]¶
Retrieve file contents as an async byte stream.
- Parameters:
  key (str) – Storage path/key for the file
- Yields:
  Chunks of file data as bytes
- Raises:
StorageFileNotFoundError – If the file does not exist
StorageError – If the retrieval fails
- Return type:
  AsyncIterator[bytes]
- async get_bytes(key)[source]¶
Retrieve entire file contents as bytes.
- Parameters:
  key (str) – Storage path/key for the file
- Return type:
  bytes
- Returns:
Complete file contents as bytes
- Raises:
StorageFileNotFoundError – If the file does not exist
StorageError – If the retrieval fails
- async delete(key)[source]¶
Delete a file.
Note
GCS delete is idempotent - deleting a non-existent key succeeds silently.
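Because deletion is idempotent, cleanup code does not need an existence check first; a minimal sketch with a hypothetical key:
await storage.delete("tmp/session-123.bin")
await storage.delete("tmp/session-123.bin")  # already gone; still succeeds silently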
- async list(prefix='', *, limit=None)[source]¶
List files with optional prefix filter.
- Parameters:
  prefix (str) – Key prefix to filter results by
  limit (int | None) – Maximum number of files to yield
- Yields:
  StoredFile metadata for each matching file
- Return type:
  AsyncIterator[StoredFile]
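A small sketch combining the prefix filter with limit (the key prefix is hypothetical):
# Yield at most 100 objects whose keys start with "logs/2024/".
async for file in storage.list("logs/2024/", limit=100):
    print(file.key, file.size)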
- async url(key, *, expires_in=None)[source]¶
Generate a signed URL for accessing the file.
- Parameters:
  key (str) – Storage path/key for the file
  expires_in (timedelta | None) – Expiration time for the URL; defaults to the configured presigned_expiry
- Return type:
  str
- Returns:
Signed URL string
Note
Signed URLs allow temporary access to private GCS objects without requiring GCP credentials. Requires service account credentials.
- async copy(source, destination)[source]¶
Copy a file within the storage backend.
Uses GCS’s native copy operation for efficiency.
- Parameters:
  source (str) – Key of the file to copy
  destination (str) – Key for the new copy
- Return type:
  StoredFile
- Returns:
StoredFile metadata for the new copy
- Raises:
StorageFileNotFoundError – If the source file does not exist
StorageError – If the copy fails
- async move(source, destination)[source]¶
Move/rename a file within the storage backend.
Uses GCS’s copy + delete operations (no native move).
- Parameters:
  source (str) – Key of the file to move
  destination (str) – New key for the file
- Return type:
  StoredFile
- Returns:
StoredFile metadata for the moved file
- Raises:
StorageFileNotFoundError – If the source file does not exist
StorageError – If the move fails
- async info(key)[source]¶
Get metadata about a file without downloading it.
- Parameters:
  key (str) – Storage path/key for the file
- Return type:
  StoredFile
- Returns:
StoredFile with metadata
- Raises:
StorageFileNotFoundError – If the file does not exist
StorageError – If retrieving metadata fails
- async start_multipart_upload(key, *, content_type=None, metadata=None, part_size=5242880)[source]¶
Start a multipart upload.
Use this for large files (typically > 100MB) to enable:
- Chunked uploads with progress tracking
- Better handling of network failures
- Memory-efficient streaming uploads
Note
GCS doesn’t have native multipart upload like S3. This implementation buffers parts in memory and uploads them when complete_multipart_upload is called. For true resumable uploads, consider using GCS’s resumable upload API directly.
- Parameters:
  key (str) – Storage path/key for the file
  content_type (str | None) – MIME type of the file
  metadata (dict[str, str] | None) – Additional metadata to store with the file
  part_size (int) – Size of each part in bytes (default 5MB)
- Return type:
  MultipartUpload
- Returns:
MultipartUpload object to track the upload state
- Raises:
StorageError – If initiating the upload fails
- async upload_part(upload, part_number, data)[source]¶
Upload a single part of a multipart upload.
Note
Parts are buffered in memory until complete_multipart_upload is called.
- Parameters:
  upload (MultipartUpload) – The MultipartUpload object returned by start_multipart_upload
  part_number (int) – Sequential number of the part
  data (bytes) – Contents of the part
- Return type:
  str
- Returns:
ETag (placeholder) of the uploaded part
- Raises:
StorageError – If the part upload fails
- async complete_multipart_upload(upload)[source]¶
Complete a multipart upload.
Combines all buffered parts and uploads the complete file to GCS.
- Parameters:
  upload (MultipartUpload) – The MultipartUpload object with all parts uploaded
- Return type:
  StoredFile
- Returns:
StoredFile metadata for the completed upload
- Raises:
StorageError – If completing the upload fails
- async abort_multipart_upload(upload)[source]¶
Abort a multipart upload.
This cancels an in-progress multipart upload and frees buffered data. Use this to clean up failed uploads.
- Parameters:
  upload (MultipartUpload) – The MultipartUpload object to abort
- Raises:
  StorageError – If aborting the upload fails
- Return type:
  None
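Putting the four multipart methods together, a manual upload loop might look like the sketch below. chunk_source is a hypothetical async iterator of byte chunks, and part numbers are assumed to start at 1; aborting in the except block releases the buffered parts.
upload = await storage.start_multipart_upload(
    "exports/archive.tar",
    content_type="application/x-tar",
)
try:
    part_number = 1
    async for chunk in chunk_source:   # hypothetical async source of byte chunks
        await storage.upload_part(upload, part_number, chunk)
        part_number += 1
    stored = await storage.complete_multipart_upload(upload)
except Exception:
    await storage.abort_multipart_upload(upload)
    raise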
- async put_large(key, data, *, content_type=None, metadata=None, part_size=10485760, progress_callback=None)[source]¶
Upload a large file using multipart upload.
This is a convenience method that handles the multipart upload process automatically. It splits the data into parts, uploads them, and completes the upload.
- Parameters:
  key (str) – Storage path/key for the file
  data (bytes | AsyncIterator[bytes]) – File contents as bytes or async byte stream
  metadata (dict[str, str] | None) – Additional metadata to store with the file
  part_size (int) – Size of each part in bytes (default 10MB)
  progress_callback (ProgressCallback | None) – Optional callback for progress updates
- Return type:
  StoredFile
- Returns:
StoredFile with metadata about the stored file
- Raises:
StorageError – If the upload fails
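For illustration, the sketch below uploads a byte string in 16 MB parts with a progress callback; the (bytes_uploaded, total_bytes) callback signature is an assumption, since ProgressCallback is not detailed on this page.
def on_progress(bytes_uploaded: int, total_bytes: int) -> None:
    # Assumed ProgressCallback shape: (bytes uploaded so far, total size).
    print(f"{bytes_uploaded}/{total_bytes} bytes")

stored = await storage.put_large(
    "backups/db-dump.sql",
    dump_bytes,                       # bytes or an async byte stream
    part_size=16 * 1024 * 1024,       # 16 MB parts
    progress_callback=on_progress,
)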
Usage Examples¶
Using Application Default Credentials¶
When running on GCP (GCE, GKE, Cloud Run, Cloud Functions), credentials are automatically detected:
from litestar_storages import GCSStorage, GCSConfig
storage = GCSStorage(
    config=GCSConfig(
        bucket="my-bucket",
        project="my-project",
    )
)
Using Service Account¶
For local development or non-GCP environments:
storage = GCSStorage(
    config=GCSConfig(
        bucket="my-bucket",
        service_file="/path/to/service-account.json",
    )
)
Using Emulator (fake-gcs-server)¶
For local testing without GCP access:
# Start emulator: docker run -p 4443:4443 fsouza/fake-gcs-server
storage = GCSStorage(
    config=GCSConfig(
        bucket="test-bucket",
        api_root="http://localhost:4443",
    )
)
With Key Prefix¶
storage = GCSStorage(
    config=GCSConfig(
        bucket="my-bucket",
        prefix="uploads/media/",
    )
)
# Key "image.jpg" becomes "uploads/media/image.jpg" in GCS
await storage.put("image.jpg", data)
Signed URLs¶
from datetime import timedelta
# Generate signed URL with default expiry (1 hour)
url = await storage.url("documents/contract.pdf")
# Custom expiry
url = await storage.url(
    "documents/contract.pdf",
    expires_in=timedelta(days=7),
)
Note
Generating signed URLs requires service account credentials with the
iam.serviceAccounts.signBlob permission.
File Operations¶
# Upload
result = await storage.put(
    "reports/monthly.pdf",
    pdf_bytes,
    content_type="application/pdf",
    metadata={"department": "finance"},
)
# Download
data = await storage.get_bytes("reports/monthly.pdf")
# Stream download
async for chunk in storage.get("reports/monthly.pdf"):
    process(chunk)
# Check existence
if await storage.exists("reports/monthly.pdf"):
    print("File exists")
# Get metadata
info = await storage.info("reports/monthly.pdf")
print(f"Size: {info.size}, Type: {info.content_type}")
# List files
async for file in storage.list("reports/"):
    print(f"{file.key}: {file.size} bytes")
# Copy and move
await storage.copy("reports/v1.pdf", "reports/v2.pdf")
await storage.move("reports/draft.pdf", "reports/final.pdf")
# Delete
await storage.delete("reports/old.pdf")
Authentication Methods¶
GCS supports multiple authentication methods:
- Application Default Credentials (ADC) - Automatic on GCP
- Service Account JSON - Via the service_file config option
- Environment Variable - Set GOOGLE_APPLICATION_CREDENTIALS (see the example below)
For production on GCP, ADC with Workload Identity is recommended.
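With the environment-variable approach, no service_file is passed explicitly; the credentials are resolved from GOOGLE_APPLICATION_CREDENTIALS at runtime (shell command shown as a comment):
# export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json
storage = GCSStorage(
    config=GCSConfig(
        bucket="my-bucket",
    )
)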
Resource Cleanup¶
Always close the storage when done to release HTTP sessions:
storage = GCSStorage(config=GCSConfig(bucket="my-bucket"))
try:
    # Use storage...
    pass
finally:
    await storage.close()
When using StoragePlugin,
cleanup is handled automatically on application shutdown.
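A minimal sketch of plugin-based setup is shown below; the StoragePlugin constructor call is hypothetical, so check the plugin documentation for the exact registration API.
from litestar import Litestar

from litestar_storages import GCSConfig, GCSStorage, StoragePlugin

storage = GCSStorage(config=GCSConfig(bucket="my-bucket"))

# Hypothetical registration; cleanup then happens on application shutdown.
app = Litestar(plugins=[StoragePlugin(storage)])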