Foxglove data management
Foxglove data management provides a centralized platform for storing, organizing, and accessing robotics recordings. Without dedicated tooling, teams often resort to ad-hoc solutions—hunting through cloud storage buckets, emailing bag files, or maintaining custom scripts to locate specific data. Foxglove handles this operational complexity so your team can focus on building robots instead of maintaining data infrastructure.
Core concepts
Devices
Devices represent the physical or simulated robots your organization tracks. When you import data, you associate each recording with a device to organize your data by source.
Devices support custom properties for tracking metadata like firmware version, hardware revision, or deployment location. Filter and search your fleet by these properties to find relevant data quickly.
Recordings
Recordings are the data files captured by your robots. Foxglove indexes recordings by device, time range, and topic, enabling fast queries across your entire data corpus.
Import data as MCAP or ROS 1 bag files. MCAP recordings can include metadata records and attachments for contextual information like configuration files, calibration data, or build artifacts.
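As an illustration of how that context gets embedded, the open-source mcap Python library exposes metadata and attachment records on its writer. The names and values below are purely illustrative, and recent library versions use the media_type field shown here.

from mcap.writer import Writer

# Write an MCAP file that carries context alongside the message data.
with open("recording.mcap", "wb") as stream:
    writer = Writer(stream)
    writer.start()

    # Metadata records hold string key/value pairs (keys and values here are examples).
    writer.add_metadata(
        name="build_info",
        data={"firmware": "1.4.2", "git_sha": "9f2c1ab"},
    )

    # Attachments embed whole files, such as a calibration dump.
    with open("camera_calibration.yaml", "rb") as calib:
        writer.add_attachment(
            create_time=0,
            log_time=0,
            name="camera_calibration.yaml",
            media_type="application/x-yaml",
            data=calib.read(),
        )

    # ... register channels and write messages here ...
    writer.finish()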
Events
Events annotate time ranges in your data. Use them to mark incidents, flag interesting behaviors, or categorize segments for later analysis.
Create events manually during visualization or programmatically via the API. Define event types with structured properties to ensure consistent labeling across your organization.
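For example, a script can create an event over the REST API roughly as follows. The endpoint path and payload fields are assumptions to confirm against the Foxglove API reference.

import os
import requests

# Sketch: create an event via the REST API.
# Endpoint path and field names are assumptions; check the API reference.
response = requests.post(
    "https://api.foxglove.dev/v1/events",
    headers={"Authorization": f"Bearer {os.environ['FOXGLOVE_API_KEY']}"},
    json={
        "deviceId": "dev_abc123",               # example device ID
        "start": "2025-01-01T00:30:00Z",
        "end": "2025-01-01T00:31:00Z",
        "metadata": {"incident": "collision"},  # structured properties for consistent labeling
    },
    timeout=30,
)
response.raise_for_status()
print(response.json())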
Data flow
Foxglove manages data through its entire lifecycle: capture on the robot, staging at the edge, ingestion, storage in a Primary Site, and access for visualization and analysis.
On robot
Capture data directly on your robots using several approaches:
- Foxglove SDK: Stream live data to a local Foxglove visualizer for real-time debugging, or record data to MCAP files for later analysis (see the recording sketch after this list).
- ROS 1 bags and ROS 2 bags: Use native ROS tooling to record bag files, then import them into Foxglove.
- Foxglove Bridge: Connect Foxglove directly to a running ROS system for live visualization without recording.
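As a minimal sketch of the SDK recording path, the Python example below assumes the SDK's open_mcap helper and a JSON-encoded Channel, following its quickstart pattern; confirm the exact names and signatures against the SDK documentation for your language.

import foxglove
from foxglove import Channel

# Assumed SDK surface: open_mcap() returns a writer that captures everything
# logged on channels until it is closed.
writer = foxglove.open_mcap("debug-session.mcap")

# A JSON-encoded channel; the topic name and message fields are illustrative.
status = Channel(topic="/robot/status", message_encoding="json")

for i in range(10):
    status.log({"sequence": i, "battery_pct": 87.5})

writer.close()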
Edge compute
For environments with limited connectivity, Edge Sites provide on-premises storage that stages data locally before forwarding to your Primary Site. Use Edge Sites in warehouses with spotty WiFi, field deployments with cellular constraints, or facilities requiring local data review before cloud upload.
Ingestion
Move recordings from robots into Foxglove through multiple channels:
- Foxglove Agent: Runs on robots and monitors directories for new recordings. Ideal for autonomous fleets that upload data without manual intervention. Handles unreliable connectivity with automatic upload resumption.
- CLI: Script uploads from build systems, CI pipelines, or batch processing workflows.
- API: Integrate uploads into custom tooling using the REST API or client SDKs (see the upload sketch after this list).
- Web UI: Drag-and-drop uploads for ad-hoc debugging sessions.
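For the API route, imports typically follow a two-step pattern: request an upload link for a device and filename, then PUT the file to that link. The endpoint and field names below are assumptions to verify against the API reference.

import os
import requests

headers = {"Authorization": f"Bearer {os.environ['FOXGLOVE_API_KEY']}"}

# Step 1: request a signed upload link for this device and filename.
# Endpoint and fields are assumptions; check the API reference.
resp = requests.post(
    "https://api.foxglove.dev/v1/data/upload",
    headers=headers,
    json={"deviceId": "dev_abc123", "filename": "recording.mcap"},
    timeout=30,
)
resp.raise_for_status()
upload_link = resp.json()["link"]

# Step 2: PUT the recording bytes to the signed link.
with open("recording.mcap", "rb") as f:
    requests.put(upload_link, data=f, timeout=300).raise_for_status()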
Storage
All recordings flow into a Primary Site, which indexes data by device, time range, and topic for fast queries. See deployment options for details on Foxglove-hosted, customer-hosted, and fully offline configurations.
Access
Retrieve and analyze your data through multiple interfaces:
- Visualization: Open recordings directly in Foxglove to inspect sensor data, replay scenarios, and debug issues.
- Export: Download recordings as MCAP or ROS 1 bag files. Export entire recordings, specific time ranges, or selected topics.
- API queries: Access data programmatically for third-party tools like Jupyter notebooks or custom analysis pipelines.
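For example, an analysis notebook might request a time-range export as MCAP before decoding it locally. The streaming endpoint and parameters shown are assumptions to check against the API reference.

import os
import requests

headers = {"Authorization": f"Bearer {os.environ['FOXGLOVE_API_KEY']}"}

# Request a download link for a device, time range, and topic selection.
# Endpoint path and payload fields are assumptions; consult the API reference.
resp = requests.post(
    "https://api.foxglove.dev/v1/data/stream",
    headers=headers,
    json={
        "deviceId": "dev_abc123",
        "start": "2025-01-01T00:00:00Z",
        "end": "2025-01-01T01:00:00Z",
        "topics": ["/imu", "/camera/front"],  # illustrative topic names
        "outputFormat": "mcap0",
    },
    timeout=30,
)
resp.raise_for_status()

# Fetch the exported MCAP from the returned link and save it for local analysis.
data = requests.get(resp.json()["link"], timeout=300).content
with open("export.mcap", "wb") as f:
    f.write(data)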
Timeline
The Timeline page provides a visual overview of recordings across devices and time. Use it to locate data, identify coverage gaps, and select time ranges for export or visualization.
Deployment options
Foxglove supports three deployment models to meet different organizational requirements.
Foxglove-hosted
The default option. Foxglove manages all infrastructure for storage, indexing, and serving data. No setup required—sign up and start uploading.
Customer-hosted Primary Sites
Primary Sites store data in your own cloud infrastructure (AWS, GCP, Azure, or S3-compatible storage) while Foxglove handles authentication and coordination. Only metadata reaches Foxglove servers; message contents stay in your environment.
Use this when your security or compliance requirements prevent sending data to third-party infrastructure.
Fully offline
Deploy entirely within your infrastructure with no external network dependencies. Requires managing Kubernetes, PostgreSQL, and authentication, but provides complete autonomy.
Use this for air-gapped environments where no external network access is permitted.
Automation
Webhooks
Webhooks notify your systems when activity occurs in your Foxglove organization. Subscribe to notifications for:
- New recordings created or imported
- New devices added
- New events created
Use webhooks to trigger CI pipelines when test recordings arrive, update internal dashboards, or integrate with existing data processing workflows.
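A receiver can be a small HTTP handler that inspects the payload and kicks off downstream work. The Flask sketch below uses placeholder payload fields and a hypothetical event type; the actual payload schema and any signature verification scheme come from the webhooks documentation.

from flask import Flask, request

app = Flask(__name__)

@app.post("/foxglove-webhook")
def handle_webhook():
    payload = request.get_json(force=True)

    # Field names and the event type below are placeholders; consult the webhook
    # payload reference for the actual schema, and verify the request signature
    # if one is provided.
    if payload.get("type") == "recording.imported":
        trigger_regression_pipeline(payload)  # hypothetical downstream hook

    return "", 204

def trigger_regression_pipeline(payload):
    # Placeholder: enqueue a CI job, update a dashboard, etc.
    print("New recording imported:", payload)

if __name__ == "__main__":
    app.run(port=8080)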
CLI and API
The Foxglove CLI and REST API enable scriptable workflows:
# Import a recording
foxglove data import recording.mcap --device-id dev_abc123
# Export a time range
foxglove data export \
--device-id dev_abc123 \
--start 2025-01-01T00:00:00Z \
--end 2025-01-01T01:00:00Z \
--output-format mcap0 > output.mcap
# Create an event
foxglove events add \
--device-id dev_abc123 \
--timestamp 2025-01-01T00:30:00Z \
--duration-nanos 60000000000 \
--metadata 'incident:collision'
Why use Foxglove data management
- Unified access: Query data across your entire fleet from a single interface. Find recordings by device, time, topic, or custom metadata without building bespoke tooling.
- Scalable storage: Handle petabytes of sensor data without managing storage infrastructure. Foxglove indexes data for fast retrieval regardless of corpus size.
- Flexible deployment: Choose the deployment model that fits your security and compliance requirements. Keep data in your own infrastructure if needed.
- Team collaboration: Share recordings and events across your organization. Create standardized event types and device properties for consistent labeling. Operations, QA, and support teams can access data directly without requiring engineering assistance.
- Integration-ready: Connect to existing pipelines via webhooks, CLI, or API. Export data in standard formats for analysis in third-party tools.