Claude + S3 Doc MCP
Connect your AI assistant (Claude, Cursor, GPT) to S3-stored documentation for intelligent, RAG-powered answers. No complex infrastructure. No heavy dependencies.
Built for developers who value simplicity and performance
🪶 Zero Dependencies
File-based vector storage with HNSWLib. No databases, no cloud services. Just simple, powerful RAG.
🏠 Your Data, Your Rules
Run embeddings locally with Ollama (free, private) or use OpenAI (cloud, higher accuracy). Switch anytime.
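Switching is typically a one-line change in your .env. A minimal sketch: OLLAMA_BASE_URL appears in the quick start below, but the other variable names here are assumptions, so check env.example for the real keys.

```
# Illustrative .env sketch; only OLLAMA_BASE_URL is confirmed by the quick start.
EMBEDDING_PROVIDER=ollama                # assumed key; or: openai
OLLAMA_BASE_URL=http://localhost:11434   # local Ollama endpoint
# OPENAI_API_KEY=sk-...                  # only needed with the OpenAI provider
```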
💾 Simple as Files
Vector indices are just files on disk. Easy to backup, version, and debug. No black boxes.
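To make "just files" concrete, here is a generic sketch using hnswlib-node (a Node binding for HNSWLib). It is illustrative only, not this project's internal code; the file path and parameters are assumptions.

```typescript
// Generic hnswlib-node sketch; illustrative only, not this project's code.
import { HierarchicalNSW } from 'hnswlib-node';

const dim = 768; // e.g. the output size of nomic-embed-text
const index = new HierarchicalNSW('cosine', dim);
index.initIndex(1000); // maximum number of vectors

// Add an embedding (in real use, produced by Ollama or OpenAI).
const embedding = Array.from({ length: dim }, () => Math.random());
index.addPoint(embedding, 0); // label 0 maps to your chunk metadata

// The whole index persists as a single file: easy to copy, back up, version.
index.writeIndexSync('./data/index.bin'); // path is illustrative

// Reload later; no database involved.
const restored = new HierarchicalNSW('cosine', dim);
restored.readIndexSync('./data/index.bin');
const { neighbors, distances } = restored.searchKnn(embedding, 5);
```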
🔌 Universal S3
AWS S3, MinIO, Scaleway, Cloudflare R2, DigitalOcean… If it speaks S3, it works.
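With non-AWS providers, S3 compatibility usually comes down to pointing the client at a custom endpoint. A hedged sketch of the relevant settings (variable names are assumptions; see env.example for the real keys):

```
# Illustrative S3 settings for an S3-compatible provider; names are assumptions.
AWS_ACCESS_KEY_ID=your-access-key
AWS_SECRET_ACCESS_KEY=your-secret-key
S3_BUCKET=my-docs
S3_ENDPOINT=http://localhost:9000   # e.g. a local MinIO; omit for AWS S3
```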
# 1. Pull Ollama model (local embeddings)
ollama pull nomic-embed-text

# 2. Configure S3 credentials
cp env.example .env  # Add your S3 credentials

# 3. Run the server
docker run -d --name s3-doc-mcp -p 3000:3000 \
  --env-file .env \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  -v $(pwd)/data:/app/data \
  yoanbernabeu/s3-doc-mcp:latest

✨ That’s it! Your MCP server is now running on http://localhost:3000
# 1. Clone and install
git clone https://github.com/yoanbernabeu/S3-Documentation-MCP-Server.git
cd S3-Documentation-MCP-Server
npm install

# 2. Configure
cp env.example .env  # Add your S3 credentials

# 3. Build and run
npm run build && npm start

🚀 Ready! Server running on http://localhost:3000
docker run -d -p 3000:3000 --env-file .env \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  yoanbernabeu/s3-doc-mcp:latest

⚡ Fastest way to get up and running!
For Developers
Dead simple to deploy
Works with Docker or Node.js. No Kubernetes, no cloud complexity. Just docker run and you’re done.
For Privacy-First Teams
100% offline capable
Use Ollama for completely local operation. Your docs never leave your infrastructure.
For Scale
Start small, scale smart
Free Ollama for dev → OpenAI for production. No vendor lock-in, no rewrites.
For Any S3 Storage
Universal compatibility
AWS, MinIO, Cloudflare R2, Scaleway… Works with any S3-compatible storage.
The Challenge
Your customers ask the same questions repeatedly. Your support team searches through 500+ markdown files manually. Slow. Inefficient. Frustrating.
The Solution
✨ Instant intelligent answers with sources
The Result
✅ Support queries answered in seconds, not minutes
✅ Exact documentation references with every answer
✅ Team efficiency up 300%
The Challenge
Company knowledge scattered across folders. Outdated search. Employees can’t find critical information quickly.
The Solution
✨ “Find the deployment process for microservice X” → Works instantly!
The Result
✅ Knowledge discovery time reduced by 80%
✅ Onboarding new developers 4x faster
✅ No more “who knows about X?” in Slack
The Challenge
Developers struggle to find specific API endpoints. Examples buried in multiple files. Context-switching kills productivity.
The Solution
✨ “Show me OAuth2 authentication with refresh tokens” → Exact code examples
The Result
✅ API integration time cut in half
✅ Zero “where’s the docs?” support tickets
✅ Developers actually read the documentation
The Challenge
Students need quick answers from course materials. Content spread across 50+ chapters. Search is keyword-only.
The Solution
✨ “Explain photosynthesis from Chapter 3” → Answer with page references
The Result
✅ Students find answers 10x faster
✅ Contextual learning with exact references
✅ 24/7 AI-powered study assistant
search_documentation
Semantic search across all docs. Returns relevant chunks with similarity scores and sources (see the example calls after the tool list below).
refresh_index
Sync your vector index with S3. Incremental or full reindex on demand.
get_full_document
Retrieve complete files by S3 key. Perfect for viewing full context after search.
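Below is a sketch of calling these three tools with the official TypeScript MCP SDK. The endpoint path and the argument names (query, mode, key) are assumptions; call listTools() to discover the real input schemas.

```typescript
// Sketch using @modelcontextprotocol/sdk; argument names are assumptions.
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StreamableHTTPClientTransport } from '@modelcontextprotocol/sdk/client/streamableHttp.js';

const client = new Client({ name: 'docs-demo', version: '1.0.0' });
// '/mcp' is a guess at the endpoint path; adjust to the server's actual route.
await client.connect(new StreamableHTTPClientTransport(new URL('http://localhost:3000/mcp')));

// Discover the real tool schemas instead of trusting the guesses below.
console.log(await client.listTools());

const hits = await client.callTool({
  name: 'search_documentation',
  arguments: { query: 'OAuth2 authentication with refresh tokens' }, // assumed arg name
});

await client.callTool({
  name: 'refresh_index',
  arguments: { mode: 'incremental' }, // assumed arg name
});

const doc = await client.callTool({
  name: 'get_full_document',
  arguments: { key: 'guides/authentication.md' }, // assumed arg name and S3 key
});
```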