Foxglove Cloud, bring your own storage
Set up a Foxglove Cloud Primary Site that reads data directly from your own cloud storage bucket. Foxglove manages all processing infrastructure — you provide the bucket and grant access.
GCP and Azure support is in development. AWS is available now.
Overview
With Foxglove Cloud bring-your-own-storage, your recordings stay in your cloud storage while Foxglove operates the indexing, querying, and streaming services in the same cloud region. There's no Kubernetes cluster to deploy or Helm chart to manage.
The setup process:
- Create a storage bucket (or use an existing one)
- Create a site in the Foxglove app, selecting your cloud provider and region
- Grant the Foxglove-provided service account read access to your bucket
- Configure bucket notifications so Foxglove knows when new files arrive
- Verify the connection
How it works
Bring-your-own-storage sites use index-in-place storage. Once bucket notifications are configured, the flow for each recording is:
- You upload a file to your bucket, which triggers a notification to Foxglove.
- Foxglove's indexer reads metadata from the file — time ranges, topics, and channel information — and stores that index.
- When you visualize or query data, Foxglove's query service reads directly from the original file in your bucket.
The original recordings remain the only copy of your data. Foxglove never transcodes, duplicates, or moves your files. Recording lifecycle is entirely yours — delete a file from your bucket and Foxglove's index updates automatically.
Each site gets a unique service account scoped to that site alone. You grant bucket access to this specific account, not to a shared Foxglove identity.
Metadata sent to Foxglove
At index time, Foxglove extracts and stores metadata such as time ranges, topic names, and channel information. At query time, Foxglove's query service reads message data directly from your bucket — message contents, sensor data, and attachments are never copied to Foxglove storage. See the Primary Sites FAQ (applicable to all deployment models) for the full list of metadata fields.
Prerequisites
- A cloud provider account (AWS, GCP, or Azure)
- Organization admin access in Foxglove
Setup
Create a storage bucket
Create a cloud storage bucket for your recordings, or use an existing one.
Foxglove reads directly from your original files, so file layout affects query performance. Before uploading:
- Use lz4 or zstd chunk compression for MCAP files; use lz4 for ROS 1 .bag files (bz2 is not recommended)
- Keep chunk sizes small (< 1 MiB) to minimize query latency
- Partition high-bitrate topics (for example, images, point clouds) into separate files from low-bitrate telemetry
See storage mode for more details.
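As a sketch of how to apply these recommendations to an existing MCAP file — assuming the mcap CLI is installed, with hypothetical file names and flags taken from its compress command:

```shell
# Rewrite an MCAP file with lz4 chunk compression and ~1 MiB chunks
# (file names are hypothetical; flags assumed from the mcap CLI's compress command)
if command -v mcap >/dev/null 2>&1; then
  mcap compress input.mcap -o output.mcap --compression lz4 --chunk-size 1048576
  mcap info output.mcap   # inspect the resulting chunk layout
  STATUS=done
else
  echo "mcap CLI not installed; see https://mcap.dev for installation"
  STATUS=skipped
fi
```

Recompressing before upload is a one-time cost; it pays off on every subsequent query, since Foxglove reads the original file in place.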
Create a site
- Go to the Sites settings page
- Click Create site
- Select your cloud provider (AWS, GCP, or Azure)
- Enter your bucket name and region
- AWS: No additional fields are required.
- GCP (support in development): No additional fields are required.
- Azure (support in development): Provide your storage account name in addition to the container name.
- Click Create
After creation, the site details page displays the identifiers and ready-to-run CLI commands you need for the next two steps:
- AWS: bucket name, IAM principal ARN, SNS topic ARN
- GCP: bucket name, service account, Pub/Sub topic
- Azure: service principal ID, subscription ID, resource group, storage account, webhook endpoint
Grant bucket access
Grant the site's service account read access to your storage bucket.
AWS
The service account is an IAM role ARN. Add a bucket policy granting it read access:
aws s3api put-bucket-policy --bucket <your-bucket> --policy '{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": { "AWS": "<site-role-arn>" },
"Action": ["s3:GetObject", "s3:ListBucket"],
"Resource": ["arn:aws:s3:::<your-bucket>", "arn:aws:s3:::<your-bucket>/*"]
}
]
}'
Or apply through the S3 Console under your bucket's Permissions > Bucket policy section.
GCP (support in development)
The service account is a Workload Identity Federation principal. Grant it the Storage Object Viewer (roles/storage.objectViewer) role on your bucket:
gcloud storage buckets add-iam-policy-binding gs://<your-bucket> \
--member=serviceAccount:<site-service-account> \
--role=roles/storage.objectViewer
Or add the IAM binding through the Google Cloud Console under your bucket's Permissions tab.
Azure (support in development)
Azure access uses a multi-tenant Entra ID application with federated credentials. To configure access:
- Consent to the Foxglove application in your Entra ID tenant. This creates a service principal in your tenant for the Foxglove app. Follow the consent link shown on the site details page.
- Assign the Storage Blob Data Reader role to the Foxglove service principal on your storage account:
az role assignment create \
--assignee <foxglove-service-principal-id> \
--role "Storage Blob Data Reader" \
--scope /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>
Or configure through the Azure Portal under your storage account's Access Control (IAM) section.
Configure bucket notifications
Foxglove needs to know when new files arrive in your bucket. The notification mechanism varies by cloud provider.
AWS
Configure S3 Event Notifications on your bucket for s3:ObjectCreated:* events, pointing to the Foxglove SNS topic ARN shown on your site's details page:
aws s3api put-bucket-notification-configuration --bucket <your-bucket> --notification-configuration '{
"TopicConfigurations": [
{
"TopicArn": "<foxglove-sns-topic-arn>",
"Events": ["s3:ObjectCreated:*"]
}
]
}'
Foxglove's SNS topic policy already permits your bucket to publish to it. You can also configure this through the S3 Console under Properties > Event notifications.
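To confirm the configuration took effect, you can read it back with the AWS CLI; a minimal sketch, assuming configured credentials and a hypothetical bucket name:

```shell
# Read back the bucket's notification configuration (bucket name is hypothetical)
BUCKET=my-recordings
if aws sts get-caller-identity >/dev/null 2>&1; then
  aws s3api get-bucket-notification-configuration --bucket "$BUCKET"
  STATUS=checked
else
  echo "AWS credentials not configured; skipping live check"
  STATUS=skipped
fi
```

The output should include a TopicConfigurations entry whose TopicArn matches the one on your site's details page.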
GCP (support in development)
Create a Pub/Sub notification on your bucket that sends to the Foxglove Pub/Sub topic:
gcloud storage notifications create gs://<your-bucket> \
--topic=<foxglove-pubsub-topic> \
--event-types=OBJECT_FINALIZE
The Foxglove Pub/Sub topic is displayed on your site's details page. Foxglove automatically grants your Cloud Storage service agent publish access to this topic during site provisioning.
Azure (support in development)
Create an Event Grid system topic on your storage account and add a subscription that delivers Blob Created events to the Foxglove webhook endpoint shown on your site's details page:
az eventgrid event-subscription create \
--name foxglove-notifications \
--source-resource-id /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account> \
--included-event-types Microsoft.Storage.BlobCreated \
--endpoint <foxglove-webhook-endpoint>
Azure Event Grid retries webhook delivery for up to 24 hours before dropping events. For stronger guarantees, configure a dead-letter container on your Event Grid subscription.
You can also notify Foxglove of new files directly via the /site-bucket-notifications API endpoint. This is useful if you want to trigger indexing from a custom pipeline rather than relying on cloud-native bucket events.
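A hedged sketch of calling that endpoint from a custom pipeline — the payload fields, base URL, and auth header below are illustrative assumptions, not the documented schema; only the endpoint path comes from this page:

```shell
# Hypothetical notification payload; the field names are illustrative assumptions
PAYLOAD='{"bucket": "my-recordings", "objectKey": "drives/2024-06-01/drive-001.mcap"}'

# Validate the JSON locally before sending
if echo "$PAYLOAD" | python3 -m json.tool >/dev/null; then
  STATUS=valid
fi
echo "payload status: $STATUS"

# Uncomment to send (base URL and auth header are assumptions):
# curl -X POST "https://api.foxglove.dev/v1/site-bucket-notifications" \
#   -H "Authorization: Bearer $FOXGLOVE_API_TOKEN" \
#   -H "Content-Type: application/json" \
#   -d "$PAYLOAD"
```

A custom pipeline can send this call as its final step after an upload completes, instead of (or in addition to) cloud-native bucket events.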
Verify the connection
After completing setup, the site details page shows the connection status:
- Bucket access — Foxglove periodically checks that the service account can read from your bucket. Once access is confirmed, the status updates automatically.
- Bucket notifications — Upload or update a test file in your bucket. Once Foxglove receives the notification, the status updates.
If either check fails, review the access grants and notification configuration above.
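The notification check above can be exercised with a small test upload; a minimal sketch for AWS, assuming configured credentials and a hypothetical bucket name:

```shell
# Create a small test object and upload it to trigger a bucket notification
printf 'foxglove connectivity test\n' > /tmp/foxglove-test.txt
BUCKET=my-recordings   # hypothetical bucket name
if aws sts get-caller-identity >/dev/null 2>&1; then
  aws s3 cp /tmp/foxglove-test.txt "s3://$BUCKET/connectivity-test.txt"
  STATUS=uploaded
else
  echo "AWS credentials not configured; skipping upload"
  STATUS=skipped
fi
```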
Manage data
Uploading recordings
Upload recordings directly to your storage bucket using your cloud provider's tools (for example, aws s3 cp, gcloud storage cp, or az storage blob upload). Once the bucket notification fires, Foxglove indexes the file and it appears on the Recordings page.
You can associate metadata with recordings using object metadata or MCAP metadata records. See the Primary Site metadata documentation (applicable to all deployment models) for details on supported keys and formats.
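For the object-metadata route, one hedged sketch using the AWS CLI — the metadata key and value here are placeholders, not documented Foxglove keys; check the metadata documentation for the supported names:

```shell
# Upload a recording with object metadata attached
# (bucket, file name, and metadata key are placeholders for illustration)
BUCKET=my-recordings
if aws sts get-caller-identity >/dev/null 2>&1; then
  aws s3 cp drive-001.mcap "s3://$BUCKET/drive-001.mcap" \
    --metadata placeholder-key=placeholder-value
  STATUS=uploaded
else
  echo "AWS credentials not configured; skipping upload"
  STATUS=skipped
fi
```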
Deleting recordings
Delete recording files directly from your storage bucket. The indexer periodically scans object storage and automatically removes recordings whose underlying files no longer exist.
Site deletion
When you delete a bring-your-own-storage site, Foxglove tears down the associated compute resources. You're responsible for:
- Revoking the bucket access grant for the site's service account
- Removing bucket notification configuration
Supported file formats
Bring-your-own-storage sites support the same file formats as index-in-place on-premises sites:
- MCAP files with chunk indexes
- ROS 1 .bag files with an index section
If a recording lacks an index section, rebuild it with mcap recover or rosbag reindex.
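A sketch of both repairs, assuming the relevant tools are on PATH and using hypothetical file names:

```shell
# Rebuild a missing MCAP index by rewriting the file with the mcap CLI
# (file names are hypothetical)
if command -v mcap >/dev/null 2>&1; then
  mcap recover broken.mcap -o recovered.mcap
  STATUS=recovered
else
  echo "mcap CLI not installed; see https://mcap.dev for installation"
  STATUS=skipped
fi

# For ROS 1 bags, reindex in place (rosbag backs up the original file):
# rosbag reindex recording.bag
```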
Other deployment models
For a full comparison of all deployment models — including Foxglove Cloud and on-premises — see choosing a deployment model.
Support
For assistance, contact [email protected].