Storage
Expo Open OTA supports multiple storage solutions for hosting your update assets: Amazon S3, Google Cloud Storage (GCS), and the Local File System. This guide will help you set up your storage solution and configure your server to use it.
The environment variables required for each storage solution are listed below. You can set them in a .env file at the root of the project, or keep them somewhere safe to prepare for deployment.
- Local File System
- Amazon S3
- Google Cloud Storage
This storage solution is not recommended for production use; it is intended for development and testing only. If you really want to use it in production, make sure only a single instance of the server is running, since assets are stored locally and not shared between instances.
To use the local file system as your storage solution, set the STORAGE_MODE and LOCAL_BUCKET_BASE_PATH environment variables, where LOCAL_BUCKET_BASE_PATH is the path where you want to store your assets. The server will create the necessary directories and store the assets in the specified location.
STORAGE_MODE=local
LOCAL_BUCKET_BASE_PATH=/path/to/your/assets
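As a quick illustration of what local mode looks like on disk, the sketch below simulates the kind of directory tree the server would create. The base path, branch, runtime version, and update ID are all made-up values, and the subdirectory layout is an assumption based on the bucket key scheme described later in this guide:

```shell
# Illustrative only: the path and layout below are assumptions,
# mirroring the <branch>/<runtimeVersion>/<updateId> key scheme
LOCAL_BUCKET_BASE_PATH=/tmp/expo-ota-assets
mkdir -p "$LOCAL_BUCKET_BASE_PATH/production/1.0.0/abc123"
# List the update directories for one runtime version
ls "$LOCAL_BUCKET_BASE_PATH/production/1.0.0"
```

Because everything lives on one machine's disk, a second server instance would not see these files, which is why local mode is unsafe with multiple instances.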
To enable Amazon S3 as your storage solution, you need to set the following environment variables:
STORAGE_MODE=s3
AWS_REGION=your-region
S3_BUCKET_NAME=your-bucket-name
For S3-compatible object storage (e.g., Cloudflare R2, MinIO, DigitalOcean Spaces):
STORAGE_MODE=s3
AWS_REGION=auto
AWS_BASE_ENDPOINT=https://account-id.r2.cloudflarestorage.com
S3_BUCKET_NAME=your-bucket-name
If you are not using AWS IAM roles, you also need to set the following environment variables:
AWS_ACCESS_KEY_ID=your-access-key-id
AWS_SECRET_ACCESS_KEY=your-secret-access-key
You don't need to allow public read access to the assets. If a CDN is configured, the server generates pre-signed URLs for the assets; otherwise, the server returns the asset directly.
Multi-app bucket sharing
If you want to share a single S3 bucket between multiple applications, you can use the S3_KEY_PREFIX environment variable to namespace all keys under a specific prefix:
S3_KEY_PREFIX=myapp
All objects will be stored under myapp/<branch>/<runtimeVersion>/<updateId>/.... This allows multiple independent Expo Open OTA instances to coexist in the same bucket without conflicts.
A trailing slash is automatically added if omitted (myapp → myapp/).
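The prefix normalization and key layout described above can be sketched in shell. The branch, runtime version, update ID, and object name below are made-up values for illustration; the actual join logic lives inside the server:

```shell
# Append a trailing slash to the prefix if it is missing
S3_KEY_PREFIX=myapp
case "$S3_KEY_PREFIX" in
  */) ;;                                  # already ends with a slash
  *)  S3_KEY_PREFIX="$S3_KEY_PREFIX/" ;;  # add one
esac
# Build an object key the way the server namespaces it
branch=production
runtimeVersion=1.0.0
updateId=abc123
echo "${S3_KEY_PREFIX}${branch}/${runtimeVersion}/${updateId}/bundle"
# → myapp/production/1.0.0/abc123/bundle
```

Since every key starts with the normalized prefix, two instances configured with different prefixes can never write to the same object.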
To enable Google Cloud Storage as your storage solution, set the following environment variables:
STORAGE_MODE=gcs
GCS_BUCKET_NAME=your-bucket-name
GOOGLE_APPLICATION_CREDENTIALS_B64=<base64-encoded service account JSON>
Setting up GCP credentials
- In the GCP Console, go to IAM & Admin > Service Accounts
- Create a new service account (or use an existing one)
- Go to your bucket in Cloud Storage > Buckets, open the Permissions tab
- Click Grant Access, add your service account with the Storage Admin role
- Back in Service Accounts, go to Keys > Add Key > Create new key > JSON
- Encode the downloaded JSON file to base64:
base64 -i /path/to/service-account.json | tr -d '\n'
- Set the output as GOOGLE_APPLICATION_CREDENTIALS_B64 in your .env
The base64 credential is used both for authenticating API calls (read, write, delete objects) and for generating signed URLs.
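A quick way to sanity-check the encoded value is to decode it and confirm you get your service account JSON back. The file path and JSON content below are placeholders; `base64 < file` is used here as a portable equivalent of the `base64 -i file` step above:

```shell
# Write a sample credential file (replace with your real service-account JSON)
printf '%s' '{"type":"service_account"}' > /tmp/sa.json
# Encode it on a single line, as expected by the environment variable
GOOGLE_APPLICATION_CREDENTIALS_B64=$(base64 < /tmp/sa.json | tr -d '\n')
# Decoding should print the original JSON unchanged
printf '%s' "$GOOGLE_APPLICATION_CREDENTIALS_B64" | base64 -d
```

If the decoded output is not valid JSON, the server will fail to authenticate against GCS.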
How asset delivery works
Unlike S3 where you can optionally configure CloudFront as a CDN, GCS uses direct signed URLs for asset delivery. When a client requests an update asset, the server generates a short-lived signed URL (15 minutes) and redirects the client to download the file directly from GCS — the server never proxies the file content itself.
This is automatic when GOOGLE_APPLICATION_CREDENTIALS_B64 is set. No additional CDN configuration is needed.
Permissions
The service account needs at minimum the Storage Admin role on your bucket. This covers:
- storage.objects.get — reading objects
- storage.objects.list — listing branches, runtime versions, and updates
- storage.objects.create — uploading new updates
- storage.objects.delete — removing updates
- Signed URL generation for asset delivery