
5 posts tagged with "ai"


AI Agents Need Databases Too: How FoundryDB Serves the Agent Era

· 7 min read
FoundryDB Team
Engineering @ FoundryDB

Something fundamental shifted in how databases get created. In 2024, most database provisioning was triggered by a human clicking a button in a dashboard or running a CLI command. By early 2026, Neon reported that 80% of new databases on their platform were created by AI agents, not humans. The database is becoming infrastructure that agents provision, configure, and tear down as part of their workflows.

This changes what a managed database platform needs to provide. Agents do not use dashboards. They do not read documentation the way engineers do. They need fast, programmatic interfaces with predictable behavior. They need databases that spin up in seconds, clean themselves up when no longer needed, and integrate natively with agent frameworks.

We built FoundryDB's agent infrastructure to meet these requirements.

Query Your Database in Plain English: FoundryDB's AI Query Console

· 7 min read
FoundryDB Team
Engineering @ FoundryDB

Most database dashboards give you a SQL editor and wish you luck. That works fine when you remember the exact name of the pg_stat_user_tables view or the difference between information_schema.TABLES in MySQL versus PostgreSQL. For everyone else, there is the AI Query Console.

FoundryDB's AI Query Console lets you type a question in plain English, translates it to a database-native SQL query using Claude, and executes it against your running service. The entire flow is read-only by design. You cannot accidentally drop a table by asking a question.
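The read-only guarantee is the interesting part of that flow. FoundryDB's actual implementation is not shown here, but a minimal sketch of one way to guard generated SQL looks like this (the function name and allow-list are illustrative assumptions, not FoundryDB's API):

```python
# Hypothetical sketch of a read-only guard for LLM-generated SQL.
# A real implementation should not rely on string inspection alone:
# in PostgreSQL, running the statement inside a transaction opened
# with SET TRANSACTION READ ONLY lets the database enforce the policy.
READ_ONLY_KEYWORDS = ("select", "show", "explain", "with")

def is_read_only(sql: str) -> bool:
    """Allow only statements that begin with a read-only keyword."""
    first_word = sql.lstrip().split(None, 1)[0].lower().rstrip(";")
    return first_word in READ_ONLY_KEYWORDS

print(is_read_only("SELECT * FROM users"))  # True
print(is_read_only("DROP TABLE users"))     # False
```

String checks like this are a first line of defense only (a `WITH` clause can wrap a data-modifying statement in PostgreSQL), which is why a server-side read-only transaction is the layer that actually makes "you cannot accidentally drop a table" true.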

Building a RAG Pipeline with PostgreSQL pgvector and Kafka on FoundryDB

· 7 min read
FoundryDB Team
Engineering @ FoundryDB

Retrieval-Augmented Generation (RAG) has become the standard approach for grounding LLMs in factual, up-to-date data. Instead of fine-tuning a model on your corpus (expensive, slow, stale within weeks), you retrieve relevant context at query time and feed it to the LLM alongside the user's question.

In 2026, RAG is no longer experimental. It powers customer support bots, internal knowledge search, legal document analysis, and code assistants at thousands of companies. The architecture has stabilized around a common pattern: ingest documents, generate embeddings, store vectors, retrieve at query time. What varies is how well you operate the infrastructure underneath.

This post walks through building a production RAG pipeline on FoundryDB using PostgreSQL with pgvector, Kafka for document ingestion, and Valkey for result caching.
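The "retrieve at query time" step of that common pattern reduces to nearest-neighbor search over stored embeddings. In production this would be a pgvector query (for example, `ORDER BY embedding <=> $1 LIMIT k`); the sketch below shows the same ranking in pure Python with toy 2-dimensional vectors, purely to make the mechanism concrete:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k(query_vec, docs, k=2):
    """docs: list of (text, embedding) pairs; return the k most similar texts."""
    ranked = sorted(docs, key=lambda d: cosine_similarity(query_vec, d[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]

# Toy corpus with made-up 2-d embeddings; real embeddings have
# hundreds or thousands of dimensions.
docs = [
    ("kafka ingestion guide", [0.9, 0.1]),
    ("pgvector index tuning", [0.1, 0.9]),
    ("valkey caching notes",  [0.5, 0.5]),
]
print(top_k([0.8, 0.2], docs, k=2))
# → ['kafka ingestion guide', 'valkey caching notes']
```

pgvector does exactly this ranking, but with an HNSW or IVFFlat index so the search stays fast at millions of rows instead of scanning every vector.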

Automatic Embedding Generation: Build RAG Without the Plumbing

· 8 min read
FoundryDB Team
Engineering @ FoundryDB

Every RAG system needs the same boring middle layer: watch a table for changes, call an embedding API, write vectors back, handle retries, manage batches, build indexes, schedule cron jobs, and pray nothing drifts out of sync at 3 AM. FoundryDB's managed embedding pipelines eliminate that entire layer. You configure a pipeline, and your PostgreSQL data gets auto-vectorized with an HNSW index, ready for similarity search.

No ETL scripts. No cron jobs. No model orchestration code.
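For a sense of what the pipeline automates, here is the manual equivalent in standard pgvector syntax; the table and column names are illustrative, and the query vector literal is a placeholder:

```sql
-- What a managed embedding pipeline sets up for you, done by hand.
CREATE EXTENSION IF NOT EXISTS vector;

-- Add a vector column sized to the embedding model's output dimension.
ALTER TABLE documents ADD COLUMN embedding vector(1536);

-- HNSW index for fast approximate nearest-neighbor search.
CREATE INDEX ON documents USING hnsw (embedding vector_cosine_ops);

-- Similarity search once vectors are populated (<=> is cosine distance).
SELECT id, content
FROM documents
ORDER BY embedding <=> '[0.01, 0.02, 0.03]'::vector
LIMIT 5;
```

The schema work above is the easy part; the layer the pipeline actually eliminates is keeping that `embedding` column in sync with the source text as rows change.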

MCP Server: Connecting AI Coding Assistants to Your Databases

· 6 min read
FoundryDB Team
Engineering @ FoundryDB

Your AI coding assistant can write SQL, generate migrations, and debug queries. But when you need a database to run that code against, you leave the conversation, open a dashboard, click through a provisioning wizard, copy credentials back into your editor, and resume. That context switch breaks flow.

FoundryDB's MCP server removes that switch. Your AI assistant provisions databases, retrieves connection strings, checks metrics, and triggers backups without you ever leaving the conversation.
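MCP servers are typically wired into an assistant through the client's configuration file. The fragment below follows the common MCP client config shape; the package name, command, and environment variable are illustrative assumptions, so check FoundryDB's MCP documentation for the actual values:

```json
{
  "mcpServers": {
    "foundrydb": {
      "command": "npx",
      "args": ["-y", "@foundrydb/mcp-server"],
      "env": {
        "FOUNDRYDB_API_KEY": "<your-api-key>"
      }
    }
  }
}
```

Once registered, the assistant discovers the server's tools (provision, connection strings, metrics, backups) and calls them directly as part of the conversation.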