Migrate from DigitalOcean Managed Databases to FoundryDB

7 min read
FoundryDB Team
Engineering @ FoundryDB

DigitalOcean Managed Databases is a popular starting point for teams that want hosted PostgreSQL, MySQL, MongoDB, or Valkey without managing infrastructure. It does the basics well: provisioning, automated backups, and TLS. But as your data platform matures, the limitations become clear. Four engines, minimal monitoring, no AI-oriented features, no pipeline templates, and no way to export metrics to your own observability stack. FoundryDB offers seven engines, built-in AI presets, predictive autoscaling, database forking, and seven metrics export destinations, all on European infrastructure.

This guide covers migrating the three most common DigitalOcean database engines (PostgreSQL, MySQL, and MongoDB) to FoundryDB. Each section is self-contained, so you can follow just the one relevant to your stack or migrate multiple databases in parallel.

Why Migrate from DigitalOcean?

DigitalOcean's managed database service has served many teams as a reliable first step, but production workloads often demand more:

  • Four engines only. DigitalOcean offers PostgreSQL, MySQL, MongoDB, and Valkey. If you need Kafka for event streaming, OpenSearch for full-text search, or Babelfish for SQL Server wire compatibility, you need a separate provider.
  • Basic monitoring. The DigitalOcean dashboard shows CPU, memory, disk, and connection metrics. There are no query-level analytics, no slow query breakdowns, no custom alerting thresholds, and no way to export metrics to Grafana, Datadog, or Prometheus.
  • No AI features. There are no pgvector presets, no embedding pipeline templates, no AI-assisted query optimization, and no built-in support for RAG architectures.
  • No database forking. Creating a copy of a production database for testing or analytics requires manual dump-and-restore. FoundryDB supports instant forking from any backup or point-in-time snapshot.
  • No predictive autoscaling. DigitalOcean requires manual scaling decisions. You monitor metrics, decide when to resize, and trigger the operation yourself.
  • Limited connection pooling. DigitalOcean offers basic built-in connection pooling for PostgreSQL (backed by PgBouncer), but exposes few tuning options and provides nothing comparable for MySQL. There is no query routing or read/write splitting of the kind ProxySQL offers.

Prerequisites

For all migrations below, you need:

  • A FoundryDB account at foundrydb.com
  • The fdb CLI installed (brew install foundrydb/tap/fdb or download from the dashboard)
  • Access to your DigitalOcean database credentials (available in the DO dashboard under Databases > Connection Details)

Engine-specific tools:

  • PostgreSQL: pg_dump and pg_restore (included with any PostgreSQL client)
  • MySQL: mysqldump and mysql client
  • MongoDB: mongodump and mongorestore (included with MongoDB Database Tools)

PostgreSQL Migration

Export from DigitalOcean

Get your DigitalOcean PostgreSQL connection details from the dashboard, then run pg_dump:

pg_dump \
"postgresql://doadmin:YOUR_DO_PASSWORD@db-postgresql-sto1-12345-do-user.b.db.ondigitalocean.com:25060/defaultdb?sslmode=require" \
--format=custom \
--no-owner \
--no-privileges \
--verbose \
-f digitalocean-pg.dump

The custom format produces a compressed binary dump that supports parallel restore and selective table import.
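Selective import works through pg_restore's table-of-contents mechanism (`--list` and `--use-list`). A sketch that skips one large table during restore — `app_logs` is a hypothetical table name standing in for whatever you want to exclude:

```shell
# Dump the archive's table of contents to a text file
pg_restore --list digitalocean-pg.dump > pg.toc

# Drop the data entry for the table you want to skip
# (TOC lines look like: "123; 1234 56789 TABLE DATA public app_logs doadmin")
grep -v 'TABLE DATA public app_logs' pg.toc > pg-partial.toc

# Restore only the entries remaining in the edited TOC
pg_restore --use-list=pg-partial.toc --dbname=defaultdb digitalocean-pg.dump
```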

Create on FoundryDB

fdb create \
--database-type postgresql \
--version 17 \
--plan-name tier-2 \
--storage-size-gb 50 \
--name my-pg-db \
--zone se-sto1

Wait for the service to reach Running status:

fdb status my-pg-db
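Provisioning takes a few minutes. If you want to script the wait, a simple polling loop works — this assumes `fdb status` prints a state string containing "Running"; adjust the pattern if your CLI version formats output differently:

```shell
# Poll until the instance reports Running, checking every 15 seconds
until fdb status my-pg-db | grep -q "Running"; do
  echo "still provisioning..."
  sleep 15
done
echo "my-pg-db is ready"
```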

Import

Retrieve your FoundryDB credentials and restore:

fdb connect my-pg-db --info

pg_restore \
--host my-pg-db-abc123.db.foundrydb.com \
--port 5432 \
--username app_user \
--dbname defaultdb \
--no-owner \
--no-privileges \
--jobs 4 \
--verbose \
digitalocean-pg.dump

Verify

fdb connect my-pg-db
-- Compare table counts
SELECT count(*) FROM information_schema.tables WHERE table_schema = 'public';

-- Check approximate row counts (n_live_tup is an estimate; run ANALYZE first,
-- or use SELECT count(*) per table, for exact figures)
SELECT schemaname, relname, n_live_tup
FROM pg_stat_user_tables
ORDER BY n_live_tup DESC LIMIT 20;

-- Verify extensions
SELECT extname, extversion FROM pg_extension;
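To compare both sides mechanically rather than by eye, you can dump per-table row counts from each server into files and diff them. A sketch assuming the connection details used above (note that n_live_tup is an estimate, so run ANALYZE on both sides first or switch to exact counts for a strict check):

```shell
COUNT_SQL="SELECT relname, n_live_tup FROM pg_stat_user_tables ORDER BY relname;"

# -At = unaligned, tuples-only output, which diffs cleanly
psql "postgresql://doadmin:YOUR_DO_PASSWORD@db-postgresql-sto1-12345-do-user.b.db.ondigitalocean.com:25060/defaultdb?sslmode=require" \
  -At -c "$COUNT_SQL" > counts-do.txt

psql "postgresql://app_user:YOUR_PASSWORD@my-pg-db-abc123.db.foundrydb.com:5432/defaultdb?sslmode=require" \
  -At -c "$COUNT_SQL" > counts-fdb.txt

diff counts-do.txt counts-fdb.txt && echo "row counts match"
```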

MySQL Migration

Export from DigitalOcean

Get your DigitalOcean MySQL connection details, then dump:

mysqldump \
-h db-mysql-sto1-12345-do-user.b.db.ondigitalocean.com \
-P 25060 \
-u doadmin \
-pYOUR_DO_PASSWORD \
--ssl-mode=REQUIRED \
--set-gtid-purged=OFF \
--single-transaction \
--routines \
--triggers \
--no-tablespaces \
defaultdb > digitalocean-mysql.sql

The --single-transaction flag gives a consistent snapshot of InnoDB tables without locking them (non-transactional engines such as MyISAM are not covered by this guarantee). The --set-gtid-purged=OFF flag avoids GTID conflicts when importing into a fresh instance.
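Before importing, it is worth a quick sanity check that the dump contains what you expect. A sketch using plain grep against the dump file produced above:

```shell
# Count schema objects in the dump: expect one CREATE TABLE per table,
# and CREATE ... PROCEDURE/FUNCTION entries if --routines captured any
grep -c 'CREATE TABLE' digitalocean-mysql.sql
grep -c 'PROCEDURE\|FUNCTION' digitalocean-mysql.sql

# With --set-gtid-purged=OFF, the dump should not set GTID state
if grep -q 'GTID_PURGED' digitalocean-mysql.sql; then
  echo "warning: dump still sets GTID_PURGED; re-run mysqldump with --set-gtid-purged=OFF"
fi
```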

Create on FoundryDB

fdb create \
--database-type mysql \
--version 8.4 \
--plan-name tier-2 \
--storage-size-gb 50 \
--name my-mysql-db \
--zone se-sto1

Wait for Running status:

fdb status my-mysql-db

Import

fdb connect my-mysql-db --info

mysql \
-h my-mysql-db-abc123.db.foundrydb.com \
-u app_user \
-pYOUR_PASSWORD \
--ssl-mode=REQUIRED \
defaultdb < digitalocean-mysql.sql

Verify

fdb connect my-mysql-db
-- Check tables
SHOW TABLES;

-- Verify approximate row counts (table_rows is an estimate for InnoDB;
-- use SELECT COUNT(*) per table for exact figures)
SELECT table_name, table_rows
FROM information_schema.tables
WHERE table_schema = 'defaultdb'
ORDER BY table_rows DESC;

-- Check stored routines migrated
SHOW PROCEDURE STATUS WHERE Db = 'defaultdb';
SHOW FUNCTION STATUS WHERE Db = 'defaultdb';
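For a stronger check than approximate row counts, MySQL's CHECKSUM TABLE statement can be run against both sides and compared; matching checksums strongly suggest a table migrated intact. A sketch for a hypothetical table named orders — substitute your own table names:

```shell
# -N suppresses column headers so the two outputs diff cleanly;
# both sides use the same database name (defaultdb), so the
# "defaultdb.orders <checksum>" output lines are directly comparable
mysql -h db-mysql-sto1-12345-do-user.b.db.ondigitalocean.com -P 25060 \
  -u doadmin -pYOUR_DO_PASSWORD --ssl-mode=REQUIRED \
  -N -e "CHECKSUM TABLE orders" defaultdb

mysql -h my-mysql-db-abc123.db.foundrydb.com \
  -u app_user -pYOUR_PASSWORD --ssl-mode=REQUIRED \
  -N -e "CHECKSUM TABLE orders" defaultdb
```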

MongoDB Migration

Export from DigitalOcean

Get your DigitalOcean MongoDB connection string, then dump:

mongodump \
--uri="mongodb+srv://doadmin:YOUR_DO_PASSWORD@db-mongodb-sto1-12345.mongo.ondigitalocean.com/admin?authSource=admin&tls=true" \
--db=my_database \
--out=./digitalocean-mongo-dump

This creates a directory with BSON files for each collection. For large databases, add --gzip to compress the output:

mongodump \
--uri="mongodb+srv://doadmin:YOUR_DO_PASSWORD@db-mongodb-sto1-12345.mongo.ondigitalocean.com/admin?authSource=admin&tls=true" \
--db=my_database \
--gzip \
--out=./digitalocean-mongo-dump

Create on FoundryDB

fdb create \
--database-type mongodb \
--version 7.0 \
--plan-name tier-2 \
--storage-size-gb 50 \
--name my-mongo-db \
--zone se-sto1

Wait for Running status:

fdb status my-mongo-db

Import

fdb connect my-mongo-db --info

mongorestore \
--uri="mongodb://app_user:YOUR_PASSWORD@my-mongo-db-abc123.db.foundrydb.com:27017/defaultdb?tls=true&authSource=admin" \
--nsFrom="my_database.*" \
--nsTo="defaultdb.*" \
./digitalocean-mongo-dump/my_database

If you used --gzip during the dump, add --gzip to the restore command as well. The --nsFrom and --nsTo flags remap the database name from your DigitalOcean source to defaultdb on FoundryDB.

Verify

fdb connect my-mongo-db
// List collections
show collections

// Check document counts
db.getCollectionNames().forEach(function(c) {
print(c + ": " + db[c].countDocuments({}));
});

// Verify indexes
db.getCollectionNames().forEach(function(c) {
print("--- " + c + " ---");
printjson(db[c].getIndexes());
});

// Test a query
db.users.find().limit(5).pretty();
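The same document counts can be pulled from both deployments non-interactively with mongosh --eval and diffed, rather than eyeballing two interactive sessions. A sketch assuming the connection URIs used above:

```shell
# Print "collection count" lines in sorted order so the outputs diff cleanly
COUNTS='db.getCollectionNames().sort().forEach(c => print(c + " " + db[c].countDocuments({})))'

mongosh --quiet \
  "mongodb+srv://doadmin:YOUR_DO_PASSWORD@db-mongodb-sto1-12345.mongo.ondigitalocean.com/my_database?authSource=admin&tls=true" \
  --eval "$COUNTS" > counts-do.txt

mongosh --quiet \
  "mongodb://app_user:YOUR_PASSWORD@my-mongo-db-abc123.db.foundrydb.com:27017/defaultdb?tls=true&authSource=admin" \
  --eval "$COUNTS" > counts-fdb.txt

diff counts-do.txt counts-fdb.txt && echo "document counts match"
```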

Updating Your Application

After importing data into FoundryDB, update your application's connection strings. The new format depends on the engine:

PostgreSQL:

postgresql://app_user:PASSWORD@my-pg-db-abc123.db.foundrydb.com:5432/defaultdb?sslmode=require

MySQL:

mysql://app_user:PASSWORD@my-mysql-db-abc123.db.foundrydb.com:3306/defaultdb?ssl-mode=REQUIRED

MongoDB:

mongodb://app_user:PASSWORD@my-mongo-db-abc123.db.foundrydb.com:27017/defaultdb?tls=true&authSource=admin

For zero-downtime migrations on high-traffic applications, consider running both databases in parallel briefly: point your application at FoundryDB, monitor for errors, and decommission the DigitalOcean instance once you have confirmed stability.
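One low-risk way to stage that cutover is to assemble the connection string from environment variables, so switching providers (or rolling back) is a config change rather than a code change. A minimal sketch for the PostgreSQL case, using the placeholder credentials from above:

```shell
# Each component comes from configuration; swap DB_HOST (and credentials)
# to point the application at FoundryDB or back at DigitalOcean
DB_USER="app_user"
DB_PASSWORD="PASSWORD"
DB_HOST="my-pg-db-abc123.db.foundrydb.com"
DB_PORT="5432"
DB_NAME="defaultdb"

DATABASE_URL="postgresql://${DB_USER}:${DB_PASSWORD}@${DB_HOST}:${DB_PORT}/${DB_NAME}?sslmode=require"
echo "$DATABASE_URL"
```

The same pattern applies to the MySQL and MongoDB URLs; only the scheme, port, and TLS query parameters differ.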

What You Gain After Migrating

Moving from DigitalOcean Managed Databases to FoundryDB gives you access to a significantly richer platform:

  • 7 engines instead of 4. PostgreSQL, MySQL, MongoDB, Valkey, Kafka, OpenSearch, and Babelfish. Manage all of them from one CLI, one dashboard, one bill.
  • AI-ready presets. Provision a PostgreSQL instance with pgvector pre-configured for embedding storage, or create a Kafka + PostgreSQL pipeline for RAG architectures, all from predefined presets.
  • Pipeline templates. Set up common data flows (CDC from PostgreSQL to Kafka, search indexing from MongoDB to OpenSearch) with pre-built templates rather than wiring everything together manually.
  • Predictive autoscaling. FoundryDB tracks CPU, memory, storage, and connection trends over time, then scales proactively before your application is affected.
  • Database forking. Create an instant copy of any database from a backup or point-in-time snapshot. Use it for testing schema changes, running analytics, or debugging production issues without touching the live database.
  • 7 metrics export destinations. Send database metrics to Grafana Cloud, Datadog, New Relic, Prometheus remote write, InfluxDB, CloudWatch, or a custom webhook. DigitalOcean locks you into their built-in dashboard.
  • Query-level analytics. See the slowest queries, most frequent queries, and lock contention patterns directly in the dashboard. No need to enable pg_stat_statements manually or parse slow query logs.
  • EU infrastructure. All FoundryDB services run in European data centers by default. No special configuration or pricing tier required for GDPR compliance.

Whether you are migrating one database or consolidating PostgreSQL, MySQL, and MongoDB onto a single platform, the process follows the same pattern: export, create, import, verify. Each engine's standard tooling works exactly as expected because FoundryDB runs the native database engines with no proxy layers or compatibility shims.