Metrics & Log Exports
FoundryDB can push metrics and database logs from any service to your existing observability stack. Once configured, exports run on a schedule with no agents or sidecars required.
How It Works
Each export integration is a push-based worker attached to a single service. At the configured interval the controller collects the latest metric samples and log entries, then ships them directly to the destination over HTTPS. There is nothing to install on your end beyond a valid API key or endpoint URL.
Key properties:
- Minimum export interval: 30 seconds. Default: 60 seconds.
- data_type controls what is sent: metrics, logs, or both.
- An integration is enabled by default when created. It can be paused without deleting it.
- The controller tracks consecutive_failures and last_export_error so you can detect broken integrations without polling logs.
Supported Destinations
Datadog
Metrics are sent to the Datadog Metrics v2 API and logs to the Datadog Logs Intake API. Metrics arrive as gauge time series and logs as structured log events, all tagged with service_id, db_type, node_id, and service_name.
Required configuration fields:
| Field | Required | Description |
|---|---|---|
| api_key | Yes | Datadog API key |
| site | No | Datadog site (default: datadoghq.com). Other values: datadoghq.eu, us3.datadoghq.com, us5.datadoghq.com, ap1.datadoghq.com |
| tags | No | Key/value map of static tags to attach to every metric and log |
Example:
curl -u admin:password -X POST \
https://api.foundrydb.com/api/v1/metrics-exports \
-H "Content-Type: application/json" \
-d '{
"service_id": "b2e1c3d4-0000-0000-0000-aabbccddeeff",
"name": "Production Datadog",
"destination_type": "datadog",
"data_type": "both",
"export_interval_seconds": 60,
"configuration": {
"api_key": "dd1ab2cd3ef4gh5ij6kl7mn8op9qr0st",
"site": "datadoghq.com",
"tags": {
"env": "production",
"team": "platform"
}
}
}'
Prometheus Remote Write
Metrics are serialised as a Prometheus WriteRequest protobuf payload (snappy-compressed) and sent to any Prometheus-compatible remote write endpoint, including Grafana Cloud, Thanos, Cortex, and Mimir. Log entries are not forwarded because Prometheus Remote Write is a metrics-only protocol.
Required configuration fields:
| Field | Required | Description |
|---|---|---|
| url | Yes | Remote write endpoint URL |
| username | No | HTTP basic auth username (mutually exclusive with bearer_token) |
| password | No | HTTP basic auth password (required when username is set) |
| bearer_token | No | Bearer token for the Authorization header |
| tls_skip_verify | No | Skip TLS certificate verification (default: false) |
Example (Grafana Cloud):
curl -u admin:password -X POST \
https://api.foundrydb.com/api/v1/metrics-exports \
-H "Content-Type: application/json" \
-d '{
"service_id": "b2e1c3d4-0000-0000-0000-aabbccddeeff",
"name": "Grafana Cloud Metrics",
"destination_type": "prometheus_remote_write",
"data_type": "metrics",
"export_interval_seconds": 60,
"configuration": {
"url": "https://prometheus-prod-01-eu-west-0.grafana.net/api/prom/push",
"username": "123456",
"password": "glc_eyJvIjoiMTIzNDU2IiwibiI6ImZvdW5kcnlkYiIsImsiOiJleGFtcGxlIn0="
}
}'
Metric names are exposed as foundrydb_<metric_type>, with labels service_id, db_type, node_id, and service_name.
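Once samples are flowing, these series can be queried like any other Prometheus metric. A sketch of a PromQL query, using a hypothetical metric_type of cpu_usage and placeholder label values (see Monitoring for the actual metric names):

```promql
foundrydb_cpu_usage{db_type="postgresql", service_name="orders-db"}
```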
OTLP / Grafana Cloud
The OTLP exporter sends metrics to {endpoint}/v1/metrics and logs to {endpoint}/v1/logs in OTLP JSON format. It is compatible with any OpenTelemetry Collector endpoint, Grafana Cloud OTLP, New Relic, Honeycomb, and others.
Required configuration fields:
| Field | Required | Description |
|---|---|---|
| endpoint | Yes | Base URL of the OTLP endpoint (without the /v1/metrics suffix) |
| headers | No | Key/value map of HTTP headers (for authentication tokens) |
| protocol | No | http or grpc (default: grpc). Note: only HTTP is used for the push transport |
| insecure | No | Skip TLS verification (default: false) |
| timeout_seconds | No | Per-request timeout in seconds (default: 30) |
Example (Grafana Cloud OTLP):
curl -u admin:password -X POST \
https://api.foundrydb.com/api/v1/metrics-exports \
-H "Content-Type: application/json" \
-d '{
"service_id": "b2e1c3d4-0000-0000-0000-aabbccddeeff",
"name": "Grafana Cloud OTLP",
"destination_type": "otlp",
"data_type": "both",
"export_interval_seconds": 60,
"configuration": {
"endpoint": "https://otlp-gateway-prod-eu-west-0.grafana.net/otlp",
"headers": {
"Authorization": "Basic MTIzNDU2OmdsY19leGFtcGxldG9rZW4="
}
}
}'
Resource attributes on each export include service.name (foundrydb), foundrydb.service_id, foundrydb.node_id, foundrydb.db_type, and foundrydb.service_name.
Elasticsearch / OpenSearch
Both metrics and logs are written to Elasticsearch (or OpenSearch) via the Bulk API. Metrics land in <index_prefix>-foundrydb-metrics and logs in <index_prefix>-foundrydb-logs. Authentication supports either an API key or basic credentials.
Required configuration fields:
| Field | Required | Description |
|---|---|---|
| endpoint | Yes | Base URL of the cluster (e.g. https://my-cluster.es.io:9243) |
| api_key | No | Elasticsearch API key (mutually exclusive with username/password) |
| username | No | Basic auth username |
| password | No | Basic auth password (required when username is set) |
| index_prefix | No | Prefix for index names (default: foundrydb) |
| tls_skip_verify | No | Skip TLS verification (default: false) |
Example:
curl -u admin:password -X POST \
https://api.foundrydb.com/api/v1/metrics-exports \
-H "Content-Type: application/json" \
-d '{
"service_id": "b2e1c3d4-0000-0000-0000-aabbccddeeff",
"name": "Elastic Cloud Production",
"destination_type": "elasticsearch",
"data_type": "both",
"export_interval_seconds": 120,
"configuration": {
"endpoint": "https://my-deployment-abc123.es.us-east-1.aws.elastic-cloud.com:9243",
"api_key": "dGVzdC1pZDp0ZXN0LWtleQ==",
"index_prefix": "foundrydb-prod"
}
}'
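To verify documents are arriving, you can search the derived metrics index directly. With the index_prefix above, the <index_prefix>-foundrydb-metrics pattern yields foundrydb-prod-foundrydb-metrics; a sketch using the same API key:

```bash
curl -H "Authorization: ApiKey dGVzdC1pZDp0ZXN0LWtleQ==" \
  "https://my-deployment-abc123.es.us-east-1.aws.elastic-cloud.com:9243/foundrydb-prod-foundrydb-metrics/_search?size=1&pretty"
```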
Grafana Loki
Log entries are pushed to the Loki push API. Metric samples are silently skipped because Loki is a log-only destination. Entries are grouped into streams by (service_id, db_type, level).
Required configuration fields:
| Field | Required | Description |
|---|---|---|
| endpoint | Yes | Loki base URL (e.g. https://logs-prod-eu-west-0.grafana.net) |
| username | No | Basic auth username (Grafana Cloud user ID) |
| password | No | Basic auth password (Grafana Cloud API key) |
| bearer_token | No | Bearer token for the Authorization header (mutually exclusive with username) |
| labels | No | Key/value map of static labels added to every stream |
Example (Grafana Cloud Loki):
curl -u admin:password -X POST \
https://api.foundrydb.com/api/v1/metrics-exports \
-H "Content-Type: application/json" \
-d '{
"service_id": "b2e1c3d4-0000-0000-0000-aabbccddeeff",
"name": "Loki Production Logs",
"destination_type": "loki",
"data_type": "logs",
"export_interval_seconds": 30,
"configuration": {
"endpoint": "https://logs-prod-eu-west-0.grafana.net",
"username": "789012",
"password": "glc_eyJleGFtcGxlIjoibG9raSJ9",
"labels": {
"env": "production"
}
}
}'
Each log line is sent as a JSON object containing message, node_id, and any structured metadata fields.
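With the stream labels described above (plus any static labels you configure), error logs for the service from the example can be pulled up in Grafana with a LogQL query such as:

```logql
{service_id="b2e1c3d4-0000-0000-0000-aabbccddeeff", level="error", env="production"}
```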
BetterStack
Log entries are sent to the BetterStack Logs ingestion endpoint. Metric samples are silently skipped because BetterStack does not provide a metrics ingestion API.
Required configuration fields:
| Field | Required | Description |
|---|---|---|
| source_token | Yes | BetterStack source token |
| endpoint | No | Override ingestion URL (default: https://in.logs.betterstack.com) |
Example:
curl -u admin:password -X POST \
https://api.foundrydb.com/api/v1/metrics-exports \
-H "Content-Type: application/json" \
-d '{
"service_id": "b2e1c3d4-0000-0000-0000-aabbccddeeff",
"name": "BetterStack Logs",
"destination_type": "betterstack",
"data_type": "logs",
"export_interval_seconds": 60,
"configuration": {
"source_token": "aBcDeFgHiJkLmNoPqRsTuVwXyZ123456"
}
}'
AWS CloudWatch
Metrics are sent to CloudWatch Metrics via PutMetricData and logs are written to CloudWatch Logs via PutLogEvents. The exporter authenticates using AWS Signature Version 4 and requires an IAM user or role with the appropriate permissions.
Required configuration fields:
| Field | Required | Description |
|---|---|---|
| region | Yes | AWS region (e.g. eu-west-1) |
| access_key_id | Yes | AWS access key ID |
| secret_access_key | Yes | AWS secret access key |
| namespace | No | CloudWatch metrics namespace (default: FoundryDB) |
| log_group_name | No | CloudWatch Logs group name (default: /foundrydb/logs). Created automatically if it does not exist. |
Minimum IAM permissions required (shown here as a complete, paste-ready policy document):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudwatch:PutMetricData",
        "cloudwatch:ListMetrics",
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}
Example:
curl -u admin:password -X POST \
https://api.foundrydb.com/api/v1/metrics-exports \
-H "Content-Type: application/json" \
-d '{
"service_id": "b2e1c3d4-0000-0000-0000-aabbccddeeff",
"name": "CloudWatch Production",
"destination_type": "cloudwatch",
"data_type": "both",
"export_interval_seconds": 60,
"configuration": {
"region": "eu-west-1",
"access_key_id": "AKIAIOSFODNN7EXAMPLE",
"secret_access_key": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
"namespace": "FoundryDB/Production",
"log_group_name": "/foundrydb/production/logs"
}
}'
Metric names follow the pattern foundrydb_<metric_type>. Dimensions include ServiceID, DBType, NodeID, and ServiceName.
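To confirm metrics are arriving, you can list what has been published with the AWS CLI (a sketch; assumes the namespace from the example above and credentials holding the cloudwatch:ListMetrics permission):

```bash
aws cloudwatch list-metrics \
  --namespace "FoundryDB/Production" \
  --region eu-west-1
```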
Managing Exports
List integrations
curl -u admin:password \
"https://api.foundrydb.com/api/v1/metrics-exports?service_id=b2e1c3d4-0000-0000-0000-aabbccddeeff"
Optional query parameters:
| Parameter | Description |
|---|---|
| service_id | Filter by service UUID |
| destination_type | Filter by destination (e.g. datadog) |
| is_enabled | true or false |
| limit / offset | Pagination |
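For example, to page through the enabled Datadog integrations of a single service, combine the filters:

```bash
curl -u admin:password \
  "https://api.foundrydb.com/api/v1/metrics-exports?service_id=b2e1c3d4-0000-0000-0000-aabbccddeeff&destination_type=datadog&is_enabled=true&limit=20&offset=0"
```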
Get a single integration
curl -u admin:password \
https://api.foundrydb.com/api/v1/metrics-exports/{integrationId}
Update an integration
All fields are optional. Only the fields you provide are updated.
curl -u admin:password -X PUT \
https://api.foundrydb.com/api/v1/metrics-exports/{integrationId} \
-H "Content-Type: application/json" \
-d '{
"export_interval_seconds": 300,
"data_type": "metrics"
}'
Enable and disable
# Pause exports without deleting the integration
curl -u admin:password -X POST \
https://api.foundrydb.com/api/v1/metrics-exports/{integrationId}/disable
# Resume
curl -u admin:password -X POST \
https://api.foundrydb.com/api/v1/metrics-exports/{integrationId}/enable
Test connectivity
Verifies that the destination is reachable and the credentials are valid without sending real data.
curl -u admin:password -X POST \
https://api.foundrydb.com/api/v1/metrics-exports/{integrationId}/test
Response:
{ "success": true }
Or, on failure:
{ "success": false, "error": "datadog API key is invalid (HTTP 403)" }
Delete an integration
curl -u admin:password -X DELETE \
https://api.foundrydb.com/api/v1/metrics-exports/{integrationId}
Returns 204 No Content on success.
Data Reference
Metric fields
Each metric sample exported to a destination carries the following attributes:
| Field | Description |
|---|---|
| service_id | UUID of the FoundryDB service |
| service_name | Human-readable service name |
| db_type | Database engine (postgresql, mysql, mongodb, valkey, kafka) |
| node_id | UUID of the individual VM node |
| metric_type | Metric name (see Monitoring for the full list) |
| value | Floating-point metric value |
| timestamp | UTC timestamp of the measurement |
Metric names are prefixed with foundrydb. when sent to Datadog and OTLP destinations, and with foundrydb_ when sent to Prometheus Remote Write and CloudWatch.
Log fields
Each log entry carries:
| Field | Description |
|---|---|
| service_id | UUID of the FoundryDB service |
| service_name | Human-readable service name |
| db_type | Database engine |
| node_id | UUID of the VM that produced the log |
| occurred_at | UTC timestamp of the log event |
| level | Severity (debug, info, warn, error) |
| message | Log message text |
| metadata | Structured key/value fields (query duration, error codes, etc.) |
Troubleshooting
Integration shows consecutive_failures > 0
Retrieve the integration and inspect last_export_error:
curl -u admin:password \
https://api.foundrydb.com/api/v1/metrics-exports/{integrationId} \
| jq '{last_export_error, consecutive_failures, last_export_at}'
Then run a connectivity test to get an immediate error message from the destination.
Common errors
| Error | Cause | Fix |
|---|---|---|
| API key is invalid (HTTP 403) | Wrong or expired credential | Rotate the key and update the integration via PUT |
| connectivity check failed: connection refused | Wrong endpoint URL or firewall rule | Verify the endpoint is reachable from the internet |
| prometheus remote write credentials rejected (HTTP 401) | Incorrect username or bearer token | Check the credentials in your Grafana Cloud settings |
| elasticsearch credentials rejected (HTTP 401) | Expired API key or wrong basic auth | Re-generate the API key in Kibana or Elastic Cloud |
| loki push returned HTTP 400 | Malformed stream labels or out-of-order timestamps | Check that the labels map does not contain characters disallowed by Loki |
| export_interval_seconds must be at least 30 | Interval set below the minimum | Use a value of 30 or higher |
Destination-specific notes
Datadog: If no data appears in Datadog, confirm the API key has the metrics_write and logs_write scopes. Use the correct site value for your Datadog region.
Prometheus Remote Write: The exporter sets X-Prometheus-Remote-Write-Version: 0.1.0 and uses snappy compression. Ensure your remote write receiver accepts this version.
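Concretely, each push request carries headers along these lines (the Content-Type and Content-Encoding values follow the Prometheus remote write specification; verify against your receiver's documentation):

```http
Content-Type: application/x-protobuf
Content-Encoding: snappy
X-Prometheus-Remote-Write-Version: 0.1.0
```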
CloudWatch: Log streams are named foundrydb-YYYY-MM-DD and rotate daily. The IAM policy must cover both the cloudwatch: and logs: actions listed above. Newly created log groups may take a few seconds to appear in the console.
Elasticsearch / OpenSearch: Documents are sent using the Bulk API (/_bulk). Verify the index template allows the @timestamp, value, and tags fields. If using OpenSearch with fine-grained access control enabled, the API key must have indices:data/write/bulk permission on the target index patterns.
BetterStack: Only the logs data type is useful here. Setting data_type to metrics or both results in the metrics portion being silently dropped, since BetterStack does not accept metric payloads.
Loki: Only logs data type is useful here. Metric samples are silently discarded. Loki requires timestamps to be in nanoseconds; the exporter handles this automatically.
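If you ever need to hand-craft a Loki push for comparison, the nanosecond timestamp the exporter generates can be reproduced with GNU date:

```shell
# Nanoseconds since the Unix epoch for a fixed UTC instant (GNU coreutils date)
date -u -d "2024-01-01T00:00:00Z" +%s%N
# → 1704067200000000000
```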