Manage data
Manage your self-hosted Primary Site's data.
Uploading data
To import data to Foxglove, self-hosted users must upload their recordings to their configured inbox bucket. Once acknowledged, the pending import will appear on the Recordings page. Once processed, its data will be available via the API and CLI.
Naming
Recordings uploaded to the inbox bucket must not begin with the `_quarantine/` prefix. This prefix is reserved by the inbox listener to quarantine files that cannot be processed. See Quarantined files below.
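As a sketch, an upload script could reject keys that collide with the reserved prefix before writing to the inbox bucket (the key value here is a hypothetical example):

```shell
# Hypothetical object key for an upload; replace with your own path.
key="recordings/2024-06-01/run1.mcap"

# Reject keys under the reserved _quarantine/ prefix before uploading.
case "$key" in
  _quarantine/*) result="rejected" ;;
  *)             result="ok" ;;
esac
echo "$result"
```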
Prevent duplication with idempotency keys
Foxglove's upload APIs use an idempotency mechanism to ensure recordings are processed only once:
- Internally, Foxglove creates a unique idempotency key to avoid duplicates after reception.
- For recordings associated with a device, Foxglove will verify that the content hash doesn't match any other recording already processed for that device before indexing it. For recordings without an associated device, users can provide a unique `key` parameter to help Foxglove avoid processing the same upload more than once. Data files for duplicate requests are written to a deterministic location and thus require no cleanup.
- If two recordings with the same content are uploaded with the same `key` parameter, they will be de-duplicated.
- If a recording is uploaded with different content but the same `key` as an existing recording, it will fail to import.
- If two recordings with the same content are uploaded with different `key` parameters and no device, they will both be imported.
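The device-scoped dedupe check can be pictured with a content hash: identical bytes hash to the same value and are de-duplicated, while different content under a reused key signals a conflict. This is only a local illustration of the idea, not Foxglove's internal implementation:

```shell
# Create two recordings with identical content and one that differs.
printf 'lidar frame 1\n' > a.bag
printf 'lidar frame 1\n' > b.bag
printf 'lidar frame 2\n' > c.bag

hash_a=$(sha256sum a.bag | cut -d' ' -f1)
hash_b=$(sha256sum b.bag | cut -d' ' -f1)
hash_c=$(sha256sum c.bag | cut -d' ' -f1)

# Same content under one key: the second upload is a duplicate.
[ "$hash_a" = "$hash_b" ] && echo "a/b: de-duplicated"
# Different content under the same key would fail to import.
[ "$hash_a" = "$hash_c" ] || echo "a/c: conflict"

rm -f a.bag b.bag c.bag
```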
Adding metadata to imports
Associate your recording with metadata for more self-contained files:
- Object metadata – For ROS bag files
- MCAP metadata – For MCAP files
Both types of metadata support the following keys:
- Device ID – ID of device associated with the recording (must match an existing Foxglove device)
- Key – Idempotency key associated with the recording
If a device ID is specified in both MCAP metadata and object metadata, object metadata will take precedence.
MCAP metadata also supports the following:
- Device name – Name of device associated with the recording
The device name can be an internal identifier and must be unique within your organization. If a recording specifies a device name which does not yet exist in Foxglove, the device will be automatically created during the import process.
If an MCAP file has more than one metadata record with `name="foxglove"`, the file's last record will take precedence.
Object metadata
Add object metadata to your files using the following key names:
- `foxglove_device_id`
- `foxglove_key`
To ensure that your file is not read before your metadata is set, write the file and set the metadata in the same operation.
MCAP metadata
Add MCAP metadata to your files using `foxglove` as the metadata's `name`, and the following key names:
- `deviceName`
- `deviceId`
- `key`
An example using the MCAP CLI:
```shell
mcap add metadata your_file.mcap --name foxglove --key deviceId=dev_abc123
```
Cloud CLI uploads
You can use your cloud provider's command-line tools to upload objects with metadata. Adapt the following examples to your team's needs.
Microsoft Azure
```shell
az storage blob upload -f ~/data/bags/gps.bag --container-name inbox --account-name yourorgfgstorage -n gps.bag --overwrite --metadata foxglove_device_id=dev_03ooHzt1GRRdnGrP
```
Google Cloud Storage
```shell
gsutil -h "x-goog-meta-foxglove_device_id:<your device id>" cp <input.bag> gs://<your inbox bucket>/<path>
```
Amazon S3
```shell
aws s3 cp input.bag s3://<inbox-bucket>/<path> --metadata '{"foxglove_device_id": "<your device ID>"}'
```
Quarantined files
During the import process, if the recording file is invalid, or if there are processing errors after multiple retries, the recording will be quarantined under the `_quarantine/` prefix in the inbox bucket.
If an inbox object is quarantined, you will see a corresponding Import Error in Foxglove, and an "error" status in the "list pending imports" API endpoint. You can retry a quarantined import from the Import Errors screen in Foxglove, or by using the "retry pending import" API endpoint.
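You can also find quarantined objects by listing the inbox bucket under the reserved prefix (for example, `aws s3 ls s3://<inbox-bucket>/_quarantine/ --recursive` with the AWS CLI). Filtering a listing by that prefix is a one-liner; here a hypothetical local listing stands in for the real bucket contents:

```shell
# Hypothetical listing of inbox object keys (stand-in for a bucket listing).
keys='recordings/run1.mcap
_quarantine/recordings/run2.mcap
recordings/run3.bag'

# Keep only objects under the reserved quarantine prefix.
quarantined=$(printf '%s\n' "$keys" | grep '^_quarantine/')
echo "$quarantined"
```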
Backing up data
If desired, you can back up the data in your lake bucket to your own cloud storage solution; however, this is generally not recommended. The lake bucket is designed to be durable and long-lived, and backing up its data can result in higher costs.
To back up the data in your lake bucket, configure the bucket to replicate to a different location in your own cloud provider or on-premises storage. On AWS, you can use S3 Cross-Region Replication.
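As a sketch only, a minimal S3 Cross-Region Replication setup might look like the following. The bucket names and IAM role ARN are placeholders, and both buckets must have versioning enabled before replication can be configured:

```shell
# Placeholder role ARN and bucket names; adapt to your environment.
cat > replication.json <<'EOF'
{
  "Role": "arn:aws:iam::123456789012:role/lake-replication-role",
  "Rules": [
    {
      "ID": "lake-backup",
      "Status": "Enabled",
      "Priority": 1,
      "Filter": {},
      "DeleteMarkerReplication": { "Status": "Disabled" },
      "Destination": { "Bucket": "arn:aws:s3:::your-backup-bucket" }
    }
  ]
}
EOF

aws s3api put-bucket-replication \
  --bucket your-lake-bucket \
  --replication-configuration file://replication.json
```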