MCP Pinecone Vector Database Server

This project implements a Model Context Protocol (MCP) server that allows reading and writing vectorized information to a Pinecone vector database. It's designed to work with both RAG-processed PDF data and Confluence data.

Features

  • Search for similar documents using text queries
  • Add new vectors to the database with custom metadata
  • Process and upload Confluence data in batch
  • Delete vectors by ID
  • Basic database statistics (temporarily disabled)

Prerequisites

  • Bun runtime
  • Pinecone API key
  • OpenAI API key (for generating embeddings)

Installation

  1. Clone this repository

  2. Install dependencies:

    bun install
    
  3. Create a .env file with the following content:

    PINECONE_API_KEY=your-pinecone-api-key
    OPENAI_API_KEY=your-openai-api-key
    PINECONE_HOST=your-pinecone-host
    PINECONE_INDEX_NAME=your-index-name
    DEFAULT_NAMESPACE=your-namespace
    

Usage

Running the MCP Server

Start the server:

bun src/index.ts

The server will start and listen for MCP commands via stdio.

Running the Example Client

Test the server with the example client:

bun examples/client.ts
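
For reference, a minimal stdio client built on the official @modelcontextprotocol/sdk package might look like the sketch below. This is an illustration only, not necessarily what examples/client.ts does; the import paths and server command are assumptions you may need to adjust to your setup.

    import { Client } from "@modelcontextprotocol/sdk/client/index.js";
    import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

    // Spawn the MCP server as a child process and communicate with it over stdio.
    const transport = new StdioClientTransport({
      command: "bun",
      args: ["src/index.ts"],
    });

    const client = new Client({ name: "example-client", version: "1.0.0" });
    await client.connect(transport);

    // List the tools the server exposes (search-vectors, add-vector, ...).
    const { tools } = await client.listTools();
    console.log(tools.map((t) => t.name));

    await client.close();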

Processing Confluence Data

The Confluence processing script provides detailed logging and verification:

bun src/scripts/process-confluence.ts <file-path> [collection] [scope]

Parameters:

  • file-path: Path to your Confluence JSON file (required)
  • collection: Document collection name (defaults to "documentation")
  • scope: Document scope (defaults to "documentation")

Example:

bun src/scripts/process-confluence.ts ./data/confluence-export.json "tech-docs" "engineering"

The script will:

  1. Validate input parameters
  2. Process and vectorize the content
  3. Upload vectors in batches (see the batching sketch after this list)
  4. Verify successful upload
  5. Provide detailed logs of the process
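
Step 3 above (uploading in batches) keeps each Pinecone request within size limits. A minimal sketch of how such batching could be implemented with the official @pinecone-database/pinecone client is shown below; the batch size and helper name are illustrative assumptions, not values taken from the actual script.

    import { Pinecone } from "@pinecone-database/pinecone";

    const pc = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });
    const index = pc.index(process.env.PINECONE_INDEX_NAME!);

    // Batch size is an assumption; the real script may use a different value.
    const BATCH_SIZE = 100;

    type VectorRecord = { id: string; values: number[]; metadata?: Record<string, any> };

    // Upsert vectors in fixed-size batches and log progress after each request.
    async function upsertInBatches(vectors: VectorRecord[], namespace: string) {
      for (let i = 0; i < vectors.length; i += BATCH_SIZE) {
        const batch = vectors.slice(i, i + BATCH_SIZE);
        await index.namespace(namespace).upsert(batch);
        console.log(`Upserted ${i + batch.length}/${vectors.length} vectors`);
      }
    }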

Available Tools

The server provides the following tools; an example call is shown after the list:

  1. search-vectors - Search for similar documents with parameters:

    • query: string (search query text)
    • topK: number (1-100, default: 5)
    • filter: object (optional filter criteria)
  2. add-vector - Add a single document with parameters:

    • text: string (content to vectorize)
    • metadata: object (vector metadata)
    • id: string (optional custom ID)
  3. process-confluence - Process Confluence JSON data with parameters:

    • filePath: string (path to JSON file)
    • namespace: string (optional, defaults to "capella-document-search")
  4. delete-vectors - Delete vectors with parameters:

    • ids: string[] (list of vector IDs)
    • namespace: string (optional, defaults to "capella-document-search")
  5. get-stats - Get database statistics (temporarily disabled)
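
As an illustration, invoking one of these tools from an MCP client (using the same @modelcontextprotocol/sdk client as in the sketch under "Running the Example Client") might look like this; the query text and filter are made-up example values:

    // Assumes `client` is a connected MCP Client, as in the earlier sketch.
    const result = await client.callTool({
      name: "search-vectors",
      arguments: {
        query: "How do I rotate my API keys?",
        topK: 5,
        // Optional metadata filter; shown here with an illustrative value.
        filter: { source: "confluence" },
      },
    });

    console.log(result.content);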

Database Configuration

The server requires a Pinecone vector database. Configure the connection details in your .env file:

PINECONE_API_KEY=your-api-key
PINECONE_HOST=your-host
PINECONE_INDEX_NAME=your-index
DEFAULT_NAMESPACE=your-namespace
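
A server like this would typically build its Pinecone client from these variables at startup. The sketch below shows one plausible initialization (Bun loads .env into process.env automatically); the actual code in src/index.ts may differ:

    import { Pinecone } from "@pinecone-database/pinecone";

    // Bun reads .env automatically, so the variables are already on process.env.
    const pinecone = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });

    // The index can optionally be targeted directly by its host URL.
    const index = pinecone.index(
      process.env.PINECONE_INDEX_NAME!,
      process.env.PINECONE_HOST,
    );

    const namespace = index.namespace(
      process.env.DEFAULT_NAMESPACE ?? "capella-document-search",
    );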

Metadata Schema

Confluence Documents

ID: confluence-[page-id]-[item-id]
title: [title]
pageId: [page-id]
spaceKey: [space-key]
type: [type]
content: [text-content]
author: [author-name]
source: "confluence"
collection: "documentation"
scope: "documentation"
...
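
Expressed as a TypeScript interface, the same schema looks roughly like the sketch below; it is reconstructed from the fields listed above, and the trailing "..." means additional fields may be present:

    // Metadata stored alongside each Confluence-derived vector.
    // Vector IDs follow the pattern: confluence-[page-id]-[item-id]
    interface ConfluenceVectorMetadata {
      title: string;
      pageId: string;
      spaceKey: string;
      type: string;
      content: string;
      author: string;
      source: "confluence";
      collection: string; // e.g. "documentation"
      scope: string;      // e.g. "documentation"
    }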

Contributing

  1. Fork the repository
  2. Create your feature branch: git checkout -b feature/my-new-feature
  3. Commit your changes: git commit -am 'Add some feature'
  4. Push to the branch: git push origin feature/my-new-feature
  5. Submit a pull request

License

MIT