Metrics & Log Exports

FoundryDB can push metrics and database logs from any service to your existing observability stack. Once configured, exports run on a schedule with no agents or sidecars required.

How It Works

Each export integration is a push-based worker attached to a single service. At the configured interval the controller collects the latest metric samples and log entries, then ships them directly to the destination over HTTPS. There is nothing to install on your end beyond a valid API key or endpoint URL.

Key properties:

  • Minimum export interval: 30 seconds. Default: 60 seconds.
  • data_type controls what is sent: metrics, logs, or both.
  • An integration is enabled by default when created. It can be paused without deleting it.
  • The controller tracks consecutive_failures and last_export_error so you can detect broken integrations without polling logs.

Supported Destinations

Datadog

Metrics are sent to the Datadog Metrics v2 API and logs to the Datadog Logs Intake API. Metrics arrive as gauge time series and logs as structured events, each tagged with service_id, db_type, node_id, and service_name.

Required configuration fields:

| Field | Required | Description |
| --- | --- | --- |
| api_key | Yes | Datadog API key |
| site | No | Datadog site (default: datadoghq.com). Other values: datadoghq.eu, us3.datadoghq.com, us5.datadoghq.com, ap1.datadoghq.com |
| tags | No | Key/value map of static tags to attach to every metric and log |

Example:

```shell
curl -u admin:password -X POST \
  https://api.foundrydb.com/api/v1/metrics-exports \
  -H "Content-Type: application/json" \
  -d '{
    "service_id": "b2e1c3d4-0000-0000-0000-aabbccddeeff",
    "name": "Production Datadog",
    "destination_type": "datadog",
    "data_type": "both",
    "export_interval_seconds": 60,
    "configuration": {
      "api_key": "dd1ab2cd3ef4gh5ij6kl7mn8op9qr0st",
      "site": "datadoghq.com",
      "tags": {
        "env": "production",
        "team": "platform"
      }
    }
  }'
```
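
For reference, each metrics flush corresponds roughly to a Datadog Metrics v2 series submission of the following shape. This is an illustrative sketch, not captured from a live export: the metric name, timestamp, value, and UUIDs are made up, and type 3 denotes a gauge in the v2 API.

```json
{
  "series": [
    {
      "metric": "foundrydb.cpu_usage",
      "type": 3,
      "points": [{ "timestamp": 1700000000, "value": 42.5 }],
      "tags": [
        "service_id:b2e1c3d4-0000-0000-0000-aabbccddeeff",
        "db_type:postgresql",
        "node_id:11111111-2222-3333-4444-555555555555",
        "service_name:prod-db",
        "env:production",
        "team:platform"
      ]
    }
  ]
}
```

Note how the static tags from the configuration (env, team) appear alongside the standard identity tags.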

Prometheus Remote Write

Metrics are serialised as a Prometheus WriteRequest protobuf payload (snappy-compressed) and sent to any Prometheus-compatible remote write endpoint, including Grafana Cloud, Thanos, Cortex, and Mimir. Log entries are not forwarded because Prometheus Remote Write is a metrics-only protocol.

Required configuration fields:

| Field | Required | Description |
| --- | --- | --- |
| url | Yes | Remote write endpoint URL |
| username | No | HTTP basic auth username (mutually exclusive with bearer_token) |
| password | No | HTTP basic auth password (required when username is set) |
| bearer_token | No | Bearer token for the Authorization header |
| tls_skip_verify | No | Skip TLS certificate verification (default: false) |

Example (Grafana Cloud):

```shell
curl -u admin:password -X POST \
  https://api.foundrydb.com/api/v1/metrics-exports \
  -H "Content-Type: application/json" \
  -d '{
    "service_id": "b2e1c3d4-0000-0000-0000-aabbccddeeff",
    "name": "Grafana Cloud Metrics",
    "destination_type": "prometheus_remote_write",
    "data_type": "metrics",
    "export_interval_seconds": 60,
    "configuration": {
      "url": "https://prometheus-prod-01-eu-west-0.grafana.net/api/prom/push",
      "username": "123456",
      "password": "glc_eyJvIjoiMTIzNDU2IiwibiI6ImZvdW5kcnlkYiIsImsiOiJleGFtcGxlIn0="
    }
  }'
```

Metric names are exposed as foundrydb_<metric_type>, with labels service_id, db_type, node_id, and service_name.
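
Queried from Prometheus, an exported sample might look like the line below. This is illustrative only: foundrydb_cpu_usage is a hypothetical metric name (see Monitoring for the real list) and the label values are invented.

```text
foundrydb_cpu_usage{service_id="b2e1c3d4-0000-0000-0000-aabbccddeeff", db_type="postgresql", node_id="11111111-2222-3333-4444-555555555555", service_name="prod-db"}  42.5
```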


OTLP / Grafana Cloud

The OTLP exporter sends metrics to {endpoint}/v1/metrics and logs to {endpoint}/v1/logs in OTLP JSON format. It is compatible with any OpenTelemetry Collector endpoint, Grafana Cloud OTLP, New Relic, Honeycomb, and others.

Required configuration fields:

| Field | Required | Description |
| --- | --- | --- |
| endpoint | Yes | Base URL of the OTLP endpoint (without the /v1/metrics suffix) |
| headers | No | Key/value map of HTTP headers (for authentication tokens) |
| protocol | No | http or grpc (default: grpc); note that the push transport currently sends over HTTP regardless of this setting |
| insecure | No | Skip TLS verification (default: false) |
| timeout_seconds | No | Per-request timeout in seconds (default: 30) |

Example (Grafana Cloud OTLP):

```shell
curl -u admin:password -X POST \
  https://api.foundrydb.com/api/v1/metrics-exports \
  -H "Content-Type: application/json" \
  -d '{
    "service_id": "b2e1c3d4-0000-0000-0000-aabbccddeeff",
    "name": "Grafana Cloud OTLP",
    "destination_type": "otlp",
    "data_type": "both",
    "export_interval_seconds": 60,
    "configuration": {
      "endpoint": "https://otlp-gateway-prod-eu-west-0.grafana.net/otlp",
      "headers": {
        "Authorization": "Basic MTIzNDU2OmdsY19leGFtcGxldG9rZW4="
      }
    }
  }'
```

Resource attributes on each export include service.name (foundrydb), foundrydb.service_id, foundrydb.node_id, foundrydb.db_type, and foundrydb.service_name.
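
In OTLP JSON terms, those resource attributes appear on every export roughly as follows. This is a sketch of the standard OTLP resource block; the UUIDs and service name are made up.

```json
{
  "resource": {
    "attributes": [
      { "key": "service.name", "value": { "stringValue": "foundrydb" } },
      { "key": "foundrydb.service_id", "value": { "stringValue": "b2e1c3d4-0000-0000-0000-aabbccddeeff" } },
      { "key": "foundrydb.node_id", "value": { "stringValue": "11111111-2222-3333-4444-555555555555" } },
      { "key": "foundrydb.db_type", "value": { "stringValue": "postgresql" } },
      { "key": "foundrydb.service_name", "value": { "stringValue": "prod-db" } }
    ]
  }
}
```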


Elasticsearch / OpenSearch

Both metrics and logs are written to Elasticsearch (or OpenSearch) via the Bulk API. Metrics land in <index_prefix>-foundrydb-metrics and logs in <index_prefix>-foundrydb-logs. Authentication supports either an API key or basic credentials.

Required configuration fields:

| Field | Required | Description |
| --- | --- | --- |
| endpoint | Yes | Base URL of the cluster (e.g. https://my-cluster.es.io:9243) |
| api_key | No | Elasticsearch API key (mutually exclusive with username/password) |
| username | No | Basic auth username |
| password | No | Basic auth password (required when username is set) |
| index_prefix | No | Prefix for index names (default: foundrydb) |
| tls_skip_verify | No | Skip TLS verification (default: false) |

Example:

```shell
curl -u admin:password -X POST \
  https://api.foundrydb.com/api/v1/metrics-exports \
  -H "Content-Type: application/json" \
  -d '{
    "service_id": "b2e1c3d4-0000-0000-0000-aabbccddeeff",
    "name": "Elastic Cloud Production",
    "destination_type": "elasticsearch",
    "data_type": "both",
    "export_interval_seconds": 120,
    "configuration": {
      "endpoint": "https://my-deployment-abc123.es.us-east-1.aws.elastic-cloud.com:9243",
      "api_key": "dGVzdC1pZDp0ZXN0LWtleQ==",
      "index_prefix": "foundrydb-prod"
    }
  }'
```
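
With the configuration above, a flush translates into Bulk API request lines of roughly this shape. The NDJSON below is illustrative: the exact document field layout is an assumption (field names follow the Data Reference section), and the values are invented. Note how the index_prefix of foundrydb-prod yields the index names shown.

```json
{ "index": { "_index": "foundrydb-prod-foundrydb-metrics" } }
{ "@timestamp": "2024-05-01T12:00:00Z", "metric_type": "cpu_usage", "value": 42.5, "service_id": "b2e1c3d4-0000-0000-0000-aabbccddeeff", "db_type": "postgresql" }
{ "index": { "_index": "foundrydb-prod-foundrydb-logs" } }
{ "@timestamp": "2024-05-01T12:00:01Z", "level": "info", "message": "checkpoint complete", "service_id": "b2e1c3d4-0000-0000-0000-aabbccddeeff", "node_id": "11111111-2222-3333-4444-555555555555" }
```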

Grafana Loki

Log entries are pushed to the Loki push API. Metric samples are silently skipped because Loki is a log-only destination. Entries are grouped into streams by (service_id, db_type, level).

Required configuration fields:

| Field | Required | Description |
| --- | --- | --- |
| endpoint | Yes | Loki base URL (e.g. https://logs-prod-eu-west-0.grafana.net) |
| username | No | Basic auth username (Grafana Cloud user ID) |
| password | No | Basic auth password (Grafana Cloud API key) |
| bearer_token | No | Bearer token for the Authorization header (mutually exclusive with username) |
| labels | No | Key/value map of static labels added to every stream |

Example (Grafana Cloud Loki):

```shell
curl -u admin:password -X POST \
  https://api.foundrydb.com/api/v1/metrics-exports \
  -H "Content-Type: application/json" \
  -d '{
    "service_id": "b2e1c3d4-0000-0000-0000-aabbccddeeff",
    "name": "Loki Production Logs",
    "destination_type": "loki",
    "data_type": "logs",
    "export_interval_seconds": 30,
    "configuration": {
      "endpoint": "https://logs-prod-eu-west-0.grafana.net",
      "username": "789012",
      "password": "glc_eyJleGFtcGxlIjoibG9raSJ9",
      "labels": {
        "env": "production"
      }
    }
  }'
```

Each log line is sent as a JSON object containing message, node_id, and any structured metadata fields.
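
Putting the stream grouping and log line format together, a push to Loki looks roughly like the payload below. It is illustrative: the label values and message are invented, and the timestamp is in nanoseconds as Loki requires. The static env label from the configuration is merged into each stream's label set.

```json
{
  "streams": [
    {
      "stream": {
        "service_id": "b2e1c3d4-0000-0000-0000-aabbccddeeff",
        "db_type": "postgresql",
        "level": "info",
        "env": "production"
      },
      "values": [
        ["1700000000000000000", "{\"message\":\"checkpoint complete\",\"node_id\":\"11111111-2222-3333-4444-555555555555\"}"]
      ]
    }
  ]
}
```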


BetterStack

Log entries are sent to the BetterStack Logs ingestion endpoint. Metric samples are silently skipped because BetterStack does not provide a metrics ingestion API.

Required configuration fields:

| Field | Required | Description |
| --- | --- | --- |
| source_token | Yes | BetterStack source token |
| endpoint | No | Override ingestion URL (default: https://in.logs.betterstack.com) |

Example:

```shell
curl -u admin:password -X POST \
  https://api.foundrydb.com/api/v1/metrics-exports \
  -H "Content-Type: application/json" \
  -d '{
    "service_id": "b2e1c3d4-0000-0000-0000-aabbccddeeff",
    "name": "BetterStack Logs",
    "destination_type": "betterstack",
    "data_type": "logs",
    "export_interval_seconds": 60,
    "configuration": {
      "source_token": "aBcDeFgHiJkLmNoPqRsTuVwXyZ123456"
    }
  }'
```
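
Each forwarded log entry becomes a JSON event of roughly this shape. This is an assumed sketch of the mapping, not a captured payload: the exact field names sent to BetterStack are an assumption, and the values are invented.

```json
{
  "dt": "2024-05-01T12:00:01Z",
  "message": "checkpoint complete",
  "level": "info",
  "node_id": "11111111-2222-3333-4444-555555555555"
}
```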

AWS CloudWatch

Metrics are sent to CloudWatch Metrics via PutMetricData and logs are written to CloudWatch Logs via PutLogEvents. The exporter authenticates using AWS Signature Version 4 and requires an IAM user or role with the appropriate permissions.

Required configuration fields:

| Field | Required | Description |
| --- | --- | --- |
| region | Yes | AWS region (e.g. eu-west-1) |
| access_key_id | Yes | AWS access key ID |
| secret_access_key | Yes | AWS secret access key |
| namespace | No | CloudWatch metrics namespace (default: FoundryDB) |
| log_group_name | No | CloudWatch Logs group name (default: /foundrydb/logs). Created automatically if it does not exist. |

Minimum IAM permissions required:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudwatch:PutMetricData",
        "cloudwatch:ListMetrics",
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}
```

Example:

```shell
curl -u admin:password -X POST \
  https://api.foundrydb.com/api/v1/metrics-exports \
  -H "Content-Type: application/json" \
  -d '{
    "service_id": "b2e1c3d4-0000-0000-0000-aabbccddeeff",
    "name": "CloudWatch Production",
    "destination_type": "cloudwatch",
    "data_type": "both",
    "export_interval_seconds": 60,
    "configuration": {
      "region": "eu-west-1",
      "access_key_id": "AKIAIOSFODNN7EXAMPLE",
      "secret_access_key": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
      "namespace": "FoundryDB/Production",
      "log_group_name": "/foundrydb/production/logs"
    }
  }'
```

Metric names follow the pattern foundrydb_<metric_type>. Dimensions include ServiceID, DBType, NodeID, and ServiceName.
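
In PutMetricData terms, a single exported sample corresponds roughly to the request body below. It is illustrative: the metric name, values, and UUIDs are made up.

```json
{
  "Namespace": "FoundryDB/Production",
  "MetricData": [
    {
      "MetricName": "foundrydb_cpu_usage",
      "Value": 42.5,
      "Timestamp": "2024-05-01T12:00:00Z",
      "Dimensions": [
        { "Name": "ServiceID", "Value": "b2e1c3d4-0000-0000-0000-aabbccddeeff" },
        { "Name": "DBType", "Value": "postgresql" },
        { "Name": "NodeID", "Value": "11111111-2222-3333-4444-555555555555" },
        { "Name": "ServiceName", "Value": "prod-db" }
      ]
    }
  ]
}
```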


Managing Exports

List integrations

```shell
curl -u admin:password \
  "https://api.foundrydb.com/api/v1/metrics-exports?service_id=b2e1c3d4-0000-0000-0000-aabbccddeeff"
```

Optional query parameters:

| Parameter | Description |
| --- | --- |
| service_id | Filter by service UUID |
| destination_type | Filter by destination (e.g. datadog) |
| is_enabled | true or false |
| limit / offset | Pagination |

Get a single integration

```shell
curl -u admin:password \
  https://api.foundrydb.com/api/v1/metrics-exports/{integrationId}
```

Update an integration

All fields are optional. Only the fields you provide are updated.

```shell
curl -u admin:password -X PUT \
  https://api.foundrydb.com/api/v1/metrics-exports/{integrationId} \
  -H "Content-Type: application/json" \
  -d '{
    "export_interval_seconds": 300,
    "data_type": "metrics"
  }'
```

Enable and disable

```shell
# Pause exports without deleting the integration
curl -u admin:password -X POST \
  https://api.foundrydb.com/api/v1/metrics-exports/{integrationId}/disable

# Resume
curl -u admin:password -X POST \
  https://api.foundrydb.com/api/v1/metrics-exports/{integrationId}/enable
```

Test connectivity

Verifies that the destination is reachable and the credentials are valid without sending real data.

```shell
curl -u admin:password -X POST \
  https://api.foundrydb.com/api/v1/metrics-exports/{integrationId}/test
```

Response:

```json
{ "success": true }
```

Or, on failure:

```json
{ "success": false, "error": "datadog API key is invalid (HTTP 403)" }
```

Delete an integration

```shell
curl -u admin:password -X DELETE \
  https://api.foundrydb.com/api/v1/metrics-exports/{integrationId}
```

Returns 204 No Content on success.


Data Reference

Metric fields

Each metric sample exported to a destination carries the following attributes:

| Field | Description |
| --- | --- |
| service_id | UUID of the FoundryDB service |
| service_name | Human-readable service name |
| db_type | Database engine (postgresql, mysql, mongodb, valkey, kafka) |
| node_id | UUID of the individual VM node |
| metric_type | Metric name (see Monitoring for the full list) |
| value | Floating-point metric value |
| timestamp | UTC timestamp of the measurement |
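
As a concrete example, one exported sample carries the fields below. The values are invented, and cpu_usage stands in for any real metric_type from the Monitoring list.

```json
{
  "service_id": "b2e1c3d4-0000-0000-0000-aabbccddeeff",
  "service_name": "prod-db",
  "db_type": "postgresql",
  "node_id": "11111111-2222-3333-4444-555555555555",
  "metric_type": "cpu_usage",
  "value": 42.5,
  "timestamp": "2024-05-01T12:00:00Z"
}
```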

Metric names are prefixed with foundrydb. (a dot) when sent to Datadog and OTLP destinations, and with foundrydb_ (an underscore) for Prometheus Remote Write and CloudWatch.

Log fields

Each log entry carries:

| Field | Description |
| --- | --- |
| service_id | UUID of the FoundryDB service |
| service_name | Human-readable service name |
| db_type | Database engine |
| node_id | UUID of the VM that produced the log |
| occurred_at | UTC timestamp of the log event |
| level | Severity (debug, info, warn, error) |
| message | Log message text |
| metadata | Structured key/value fields (query duration, error codes, etc.) |
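
A single exported log entry therefore looks roughly like this (invented values; the metadata keys are hypothetical examples of structured fields):

```json
{
  "service_id": "b2e1c3d4-0000-0000-0000-aabbccddeeff",
  "service_name": "prod-db",
  "db_type": "postgresql",
  "node_id": "11111111-2222-3333-4444-555555555555",
  "occurred_at": "2024-05-01T12:00:01Z",
  "level": "warn",
  "message": "slow query detected",
  "metadata": { "duration_ms": 1523 }
}
```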

Troubleshooting

Integration shows consecutive_failures > 0

Retrieve the integration and inspect last_export_error:

```shell
curl -u admin:password \
  https://api.foundrydb.com/api/v1/metrics-exports/{integrationId} \
  | jq '{last_export_error, consecutive_failures, last_export_at}'
```

Then run a connectivity test to get an immediate error message from the destination.

Common errors

| Error | Cause | Fix |
| --- | --- | --- |
| API key is invalid (HTTP 403) | Wrong or expired credential | Rotate the key and update the integration via PUT |
| connectivity check failed: connection refused | Wrong endpoint URL or firewall rule | Verify the endpoint is reachable from the internet |
| prometheus remote write credentials rejected (HTTP 401) | Incorrect username or bearer token | Check the credentials in your Grafana Cloud settings |
| elasticsearch credentials rejected (HTTP 401) | Expired API key or wrong basic auth | Re-generate the API key in Kibana or Elastic Cloud |
| loki push returned HTTP 400 | Malformed stream labels or out-of-order timestamps | Check that the labels map does not contain characters disallowed by Loki |
| export_interval_seconds must be at least 30 | Interval set below the minimum | Use a value of 30 or higher |

Destination-specific notes

Datadog: If no data appears in Datadog, confirm the API key has the metrics_write and logs_write scopes. Use the correct site value for your Datadog region.

Prometheus Remote Write: The exporter sets X-Prometheus-Remote-Write-Version: 0.1.0 and uses snappy compression. Ensure your remote write receiver accepts this version.

CloudWatch: Log streams are named foundrydb-YYYY-MM-DD and rotate daily. The IAM policy must cover both the cloudwatch:* and logs:* actions listed above. Newly created log groups may take a few seconds to appear in the console.

Elasticsearch / OpenSearch: Documents are sent using the Bulk API (/_bulk). Verify the index template allows the @timestamp, value, and tags fields. If using OpenSearch with fine-grained access control enabled, the API key must have indices:data/write/bulk permission on the target index patterns.

BetterStack: Only the logs data type is useful here. Setting data_type to metrics or both silently drops the metrics portion, since BetterStack does not accept metric payloads.

Loki: Only the logs data type is useful here; metric samples are silently discarded. Loki requires timestamps in nanoseconds, and the exporter handles that conversion automatically.