# Vector Databases for AI Apps: Pinecone vs pgvector vs Weaviate
Semantic search, RAG pipelines, and recommendation systems all need vector storage.
Here's how the main options compare and when to use each.
## What Are Vector Databases?

Instead of exact keyword matches, vector databases find similar items by measuring the distance between embedding vectors in high-dimensional space.
```text
Query: 'How do I handle auth?'
Embedding: [0.23, -0.41, 0.87, ...] (1536 dimensions)

Nearest neighbors:
- 'Authentication setup guide'  (distance: 0.12)
- 'JWT token management'        (distance: 0.18)
- 'OAuth2 implementation'       (distance: 0.21)
```
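The distances above are typically cosine distances. As a minimal sketch of what the database computes under the hood (plain TypeScript, no library assumed):

```typescript
// Cosine distance = 1 - cosine similarity; smaller means more similar
function dot(a: number[], b: number[]): number {
  return a.reduce((sum, x, i) => sum + x * b[i], 0)
}

function cosineDistance(a: number[], b: number[]): number {
  const norm = (v: number[]) => Math.sqrt(dot(v, v))
  return 1 - dot(a, b) / (norm(a) * norm(b))
}

// Vectors pointing the same way → distance 0; orthogonal vectors → distance 1
console.log(cosineDistance([1, 0], [2, 0])) // 0
console.log(cosineDistance([1, 0], [0, 1])) // 1
```

Real databases use approximate nearest-neighbor indexes rather than this brute-force math, but the distance metric is the same.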
## Option 1: pgvector (PostgreSQL Extension)

If you already use PostgreSQL, this is the easiest path:
```sql
-- Enable the extension
CREATE EXTENSION vector;

-- Store embeddings alongside your data
ALTER TABLE documents ADD COLUMN embedding vector(1536);

-- Create an IVFFlat index for fast approximate similarity search
CREATE INDEX documents_embedding_idx
  ON documents USING ivfflat (embedding vector_cosine_ops)
  WITH (lists = 100);
```
```typescript
import OpenAI from 'openai'
import { db } from '@/lib/db' // your Prisma client

const openai = new OpenAI()

async function searchDocuments(query: string, limit = 5) {
  // Generate the query embedding
  const { data } = await openai.embeddings.create({
    model: 'text-embedding-3-small',
    input: query,
  })

  // pgvector expects a vector literal like '[0.1,0.2,...]',
  // so serialize the array before binding it as a parameter
  const queryEmbedding = JSON.stringify(data[0].embedding)

  // Find the nearest documents by cosine distance (the <=> operator)
  const results = await db.$queryRaw`
    SELECT id, title, content,
           1 - (embedding <=> ${queryEmbedding}::vector) AS similarity
    FROM documents
    ORDER BY embedding <=> ${queryEmbedding}::vector
    LIMIT ${limit}
  `
  return results
}
```
Best for: Apps already on PostgreSQL, smaller datasets (<1M vectors), tight budget.
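One gotcha worth calling out: Postgres drivers can't bind a JavaScript `number[]` directly to a `vector` column, so you pass pgvector's text format, `'[x1,x2,...]'`, and cast with `::vector`. A tiny helper makes that explicit (`toVectorLiteral` is an illustrative name, not a library export):

```typescript
// pgvector parses vectors from the text format '[1,2,3]'
function toVectorLiteral(embedding: number[]): string {
  return `[${embedding.join(',')}]`
}

// Pass the result as a SQL parameter and cast with ::vector
console.log(toVectorLiteral([0.23, -0.41, 0.87])) // [0.23,-0.41,0.87]
```

`JSON.stringify` on a numeric array happens to produce the same format, which is why it works in the search function above.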
## Option 2: Pinecone (Managed, Scalable)

```bash
npm install @pinecone-database/pinecone
```
```typescript
import { Pinecone } from '@pinecone-database/pinecone'

const pc = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! })
const index = pc.index('my-index')

// Upsert vectors
await index.upsert([
  {
    id: 'doc-1',
    values: embedding, // number[] from your embedding model
    metadata: { title: 'Auth Guide', source: 'docs' },
  },
])

// Query similar vectors
const results = await index.query({
  vector: queryEmbedding,
  topK: 5,
  includeMetadata: true,
  filter: { source: { $eq: 'docs' } }, // metadata filtering
})

const docs = results.matches.map(m => ({
  id: m.id,
  score: m.score,
  ...m.metadata,
}))
```
Best for: Production AI apps, >1M vectors, need managed scaling.
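When backfilling a large dataset, don't upsert everything in one call: Pinecone's guidance is to send batches on the order of 100 vectors per request. A generic batching helper (sketched here, not part of the SDK) keeps that tidy:

```typescript
// Split records into fixed-size batches for sequential upserts
function toBatches<T>(items: T[], size = 100): T[][] {
  const batches: T[][] = []
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size))
  }
  return batches
}

// Usage sketch:
// for (const batch of toBatches(records)) await index.upsert(batch)
```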
## Building a RAG Pipeline
```typescript
async function ragAnswer(question: string): Promise<string> {
  // 1. Retrieve relevant chunks
  //    (searchDocuments embeds the question internally, so pass the raw string)
  const relevant = await searchDocuments(question, 3)

  // 2. Build the context block
  const context = relevant
    .map(doc => `[${doc.title}]\n${doc.content}`)
    .join('\n\n')

  // 3. Generate an answer grounded in the retrieved context
  const completion = await openai.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [
      {
        role: 'system',
        content: `Answer based on this context:\n${context}`,
      },
      { role: 'user', content: question },
    ],
  })

  return completion.choices[0].message.content!
}
```
## Chunking Strategy

Long documents should be split into overlapping chunks before embedding, so each stored vector maps to a focused passage:
```typescript
// Word-based chunks with overlap, so context isn't lost at chunk boundaries
function chunkDocument(text: string, chunkSize = 500, overlap = 50): string[] {
  const words = text.split(' ')
  const chunks: string[] = []

  for (let i = 0; i < words.length; i += chunkSize - overlap) {
    chunks.push(words.slice(i, i + chunkSize).join(' '))
    if (i + chunkSize >= words.length) break // last chunk reached
  }

  return chunks
}
```
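A quick check of the overlap behavior, with small illustrative sizes (the function is repeated so the snippet runs standalone):

```typescript
function chunkDocument(text: string, chunkSize = 500, overlap = 50): string[] {
  const words = text.split(' ')
  const chunks: string[] = []
  for (let i = 0; i < words.length; i += chunkSize - overlap) {
    chunks.push(words.slice(i, i + chunkSize).join(' '))
    if (i + chunkSize >= words.length) break
  }
  return chunks
}

// 10 words, chunks of 4 words with an overlap of 1 → the window advances 3 words at a time
console.log(chunkDocument('a b c d e f g h i j', 4, 1))
// → [ 'a b c d', 'd e f g', 'g h i j' ]
```

Note that each chunk repeats the last word of the previous one; with the defaults (500/50), every chunk shares 50 words with its neighbor.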
## Comparison
| Feature | pgvector | Pinecone | Weaviate |
|---|---|---|---|
| Setup | Easy (existing PG) | Easy (managed) | Moderate |
| Scale | <1M vectors | Unlimited | Unlimited |
| Cost | Free | Free tier + paid | Self-hosted or managed |
| Metadata filter | SQL | Built-in | GraphQL |
| Hybrid search | Limited | Yes | Yes |
Building AI features into your SaaS? The AI SaaS Starter Kit includes Claude/OpenAI API routes pre-configured with streaming. Add your vector DB of choice. $99 one-time.