3 posts tagged with "valkey"

Building a RAG Pipeline with PostgreSQL pgvector and Kafka on FoundryDB

· 7 min read
FoundryDB Team
Engineering @ FoundryDB

Retrieval-Augmented Generation (RAG) has become the standard approach for grounding LLMs in factual, up-to-date data. Instead of fine-tuning a model on your corpus (expensive, slow, stale within weeks), you retrieve relevant context at query time and feed it to the LLM alongside the user's question.

In 2026, RAG is no longer experimental. It powers customer support bots, internal knowledge search, legal document analysis, and code assistants at thousands of companies. The architecture has stabilized around a common pattern: ingest documents, generate embeddings, store vectors, retrieve at query time. What varies is how well you operate the infrastructure underneath.
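The retrieve-at-query-time step of that pattern can be sketched in a few lines. This is a minimal illustration, not FoundryDB's implementation: the table name `doc_chunks`, the prompt wording, and the helper names are assumptions, and the SQL uses pgvector's `<=>` cosine-distance operator.

```python
def build_retrieval_query(table: str = "doc_chunks", k: int = 5) -> str:
    """SQL for a pgvector k-nearest-neighbour search.

    pgvector's <=> operator computes cosine distance; ordering by it and
    taking LIMIT k returns the k chunks closest to the query embedding.
    The %s placeholder is filled with the query embedding at execute time.
    """
    return (
        f"SELECT content, embedding <=> %s::vector AS distance "
        f"FROM {table} ORDER BY distance LIMIT {k}"
    )


def assemble_prompt(question: str, chunks: list[str]) -> str:
    """Ground the LLM by prepending the retrieved chunks to the question."""
    context = "\n---\n".join(chunks)
    return (
        f"Answer using only this context:\n{context}\n\n"
        f"Question: {question}"
    )
```

At query time you would embed the user's question, run the generated SQL through your PostgreSQL driver, and pass `assemble_prompt(...)` to the LLM.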

This post walks through building a production RAG pipeline on FoundryDB using PostgreSQL with pgvector, Kafka for document ingestion, and Valkey for result caching.

From 5 Database Providers to 1: Why We Built a Multi-Engine Platform

· 7 min read
FoundryDB Team
Engineering @ FoundryDB

If you run a modern application stack, you probably use at least three different database engines. PostgreSQL for your application data. MongoDB or another document store for unstructured content. Valkey (or Redis) for caching and session storage. Kafka for event streaming. Maybe MySQL for a legacy service that nobody wants to migrate.

Each engine runs on a different managed provider. Each provider has its own dashboard, its own CLI, its own billing, its own alerting system, its own way of handling backups, its own access control model. You pay five bills, manage five sets of credentials, and context-switch between five different interfaces when something goes wrong at 2 AM.

We built FoundryDB to solve this problem: one platform for all your database engines, with a single API, a single dashboard, and a single bill.

Getting Started with Valkey: Sub-Millisecond Caching for Your Application

· 6 min read
FoundryDB Team
Engineering @ FoundryDB

Most applications hit a performance wall that has nothing to do with their code. The database query that takes 50ms works fine until you are serving 10,000 requests per minute and your connection pool is saturated. Adding an in-memory caching layer drops that response time to under a millisecond and takes the read load off your primary database.
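The caching layer described above is usually implemented as cache-aside: check the cache first, and only fall through to the database on a miss. Here is a minimal sketch of the pattern; a plain dict stands in for the Valkey client so the logic is visible without a live server, and all names are illustrative.

```python
import time


def cache_aside_get(cache: dict, key: str, ttl: float, fetch):
    """Cache-aside read: serve from cache if fresh, else fetch and store.

    `cache` stands in for a Valkey/Redis client (in production you would
    use SET with an EX expiry instead of storing timestamps yourself).
    `fetch` is the fallback that runs the expensive database query.
    """
    entry = cache.get(key)
    now = time.monotonic()
    if entry is not None and now - entry[0] < ttl:
        return entry[1]          # cache hit: no load on the primary database
    value = fetch()              # cache miss: run the 50ms database query once
    cache[key] = (now, value)    # store with a timestamp for TTL checks
    return value
```

With a real client the dict operations become `GET`/`SET key value EX ttl`, but the shape of the pattern is the same: the second and subsequent reads within the TTL never touch the database.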

Valkey is the open-source, Redis-compatible in-memory data store that the community rallied behind after Redis changed its license in 2024. It is wire-compatible with Redis, which means your existing Redis clients, libraries, and tooling work without modification. No license concerns, no vendor lock-in, and active development under the Linux Foundation.

This guide walks through provisioning a managed Valkey instance on FoundryDB and implementing common caching patterns in Python, Node.js, and Go.