Storage mode

A self-hosted primary site supports two data storage modes. The choice of mode affects query performance, data lifecycle management, and operational overhead. The site's storage mode is selected at creation time and cannot be changed.

Query-optimized

Query-optimized mode is the default. It provides the best query performance with minimal operational overhead.

In this mode:

  1. A recording file is uploaded to an inbox bucket.
  2. The inbox listener transcodes the data into a query-optimized format (only rearranging and compressing, never altering the data itself).
  3. The optimized data is written to the lake bucket.
  4. The original file in the inbox is deleted.

Once a recording has been imported into the lake bucket, its lifecycle is managed through Foxglove. Do not attempt to modify objects in the lake bucket directly.
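
As a sketch, assuming an S3-compatible inbox bucket and the AWS CLI (the bucket and file names below are placeholders, not part of the product), uploading a recording might look like:

    # Copy a local recording into the site's inbox bucket (bucket name is hypothetical).
    aws s3 cp recording.mcap s3://example-site-inbox/recording.mcap

    # The inbox listener then writes query-optimized data to the lake bucket
    # and deletes this original object once the import succeeds.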

Index-in-place

The index-in-place storage mode is intended for users who want to retain their original recording files and manage their lifecycle out-of-band.

In this mode:

  1. The user uploads recordings to a storage bucket.
  2. The indexer reads metadata from the recording and uploads the resulting index to Foxglove.
  3. Foxglove then queries data directly from the uploaded file.

Recording lifecycle is managed by the user instead of through Foxglove. To remove a recording, delete the file from the storage bucket.
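
As a sketch, assuming an S3-compatible storage bucket and the AWS CLI (bucket and file names are placeholders):

    # Upload an indexed recording; the indexer reads its metadata and
    # registers the recording with Foxglove.
    aws s3 cp recording.mcap s3://example-recordings/recording.mcap

    # Remove the recording from Foxglove by deleting the underlying object.
    aws s3 rm s3://example-recordings/recording.mcap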

In this mode, Foxglove will only read data from MCAP files with Chunk Indexes, or ROS 1 .bag files with an index section.

tip

If a recording does not have an index section, you can rebuild the index with mcap recover or rosbag reindex.
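
For example, using the mcap CLI or the rosbag tool (file names are illustrative):

    # Rebuild the index of an MCAP file that lacks Chunk Index records.
    mcap recover unindexed.mcap -o recovered.mcap

    # Rebuild the index section of a ROS 1 bag in place.
    rosbag reindex recording.bag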

Prepare files before uploading to ensure good query performance:

  • For MCAP files, chunk compression using lz4 or zstd is recommended.
  • For ROS 1 .bag files, use lz4 compression. bz2 compression is not recommended.
  • Use small chunk sizes (<1 MiB) to minimize query latency.
  • Partition high-bitrate topics (e.g., images, point clouds) into separate files. Because compression happens at the chunk level, mixing high- and low-bitrate topics in the same chunk forces queries for low-bitrate data to also fetch and decompress the high-bitrate data. Separating topics avoids this overhead (see the sketch after this list).
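
The mcap CLI can repack an existing file along these lines. This is a minimal sketch: the --chunk-size and --output-compression flags are assumed here, so confirm the exact names with mcap filter --help for your CLI version, and the topic names are placeholders.

    # Repack a recording into ~1 MiB zstd-compressed chunks
    # (flag names assumed; verify with `mcap filter --help`).
    mcap filter input.mcap -o repacked.mcap --output-compression zstd --chunk-size 1048576

    # Split a high-bitrate camera topic into its own file and keep the rest together.
    mcap filter input.mcap -o images.mcap --include-topic-regex "^/camera/.*"
    mcap filter input.mcap -o other.mcap --exclude-topic-regex "^/camera/.*"
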
tip

The mcap info command includes a "compression" section in its output, which indicates the average bitrate required to stream data from the file in real time. Use mcap du to identify high-bitrate topics, and mcap filter to extract data from particular topics.
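
A sketch of that workflow (topic names are placeholders):

    # Summarize the file, including the "compression" section and its bitrate estimate.
    mcap info recording.mcap

    # Break down storage usage to find high-bitrate topics.
    mcap du recording.mcap

    # Extract a high-bitrate topic into its own file.
    mcap filter recording.mcap -o pointclouds.mcap --include-topic-regex "^/lidar/points$"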