Compare commits

10 Commits: develop ... fe21c6b790

| SHA1 |
|------|
| fe21c6b790 |
| 58d09ebc35 |
| 28169f48bc |
| a7d9c15d27 |
| e1a0d82864 |
| 561d858a7f |
| f8e5a8d742 |
| 039b53dd02 |
| 2969ad73ea |
| a34633eaee |

README.md (changed, 2 → 136 lines; the old title line `# Cariddi` was replaced)
# Cariddi – MCP Client and Server

Complete MCP (Model Context Protocol) stack for CTF and reverse engineering workflows:

---

## Cariddi Server

FastMCP server exposing filesystem and execution tools, with correct handling of escape characters when writing Python files.

### Setup

```bash
cd Cariddi
python3 -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate
pip install -r requirements.txt
```

### Run

```bash
source venv/bin/activate
python main.py
```

Server listens on `http://0.0.0.0:8000/mcp` (streamable HTTP).

### Environment

- `FASTMCP_HOST` / `MCP_HOST`: host (default `0.0.0.0`)
- `FASTMCP_PORT` / `MCP_PORT`: port (default `8000`)
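For example, to bind the server to localhost on a different port (a minimal sketch; assumes `main.py` reads these variables as described above):

```bash
MCP_HOST=127.0.0.1 MCP_PORT=9000 python main.py
```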
### MCP Inspector

With the server running:

```bash
npx @modelcontextprotocol/inspector --url http://localhost:8000/mcp
```

Use transport **Streamable HTTP** and URL `http://localhost:8000/mcp`.

Or run inspector and server together:

```bash
npx @modelcontextprotocol/inspector python main.py
```

With Compose:

```bash
docker-compose up -d
```

## Cariddi Client

Python MCP client that talks to Ollama and connects to MCP servers. Configured as a **Crypto Solver Agent** for CTF crypto challenges.

### Requirements

- Python 3.7+
- [Ollama](https://ollama.ai/) installed and running

### Install

```bash
cd CariddiClient
pip install -r requirements.txt
ollama serve
ollama pull ministral-3  # or llama3.2
```

### Usage

```bash
# List models
python mcpClient.py --list-models

# Single prompt
python mcpClient.py --prompt "What is the capital of France?"

# Interactive
python mcpClient.py --interactive

# Custom Ollama and model
python mcpClient.py --base-url http://localhost:11434 --model ministral-3 --prompt "Hello!"

# Connect to MCP server (streamable HTTP)
python mcpClient.py --mcp-server "http://localhost:8000/mcp" --prompt "Use tools to help me"
python mcpClient.py --mcp-server "http://localhost:8000/mcp" --interactive

# With auth headers
python mcpClient.py --mcp-server "http://localhost:8000/mcp" --mcp-headers '{"Authorization": "Bearer token"}' --interactive
```

### Defaults

- Ollama: `http://localhost:11434`
- Model: `ministral-3`
- MCP Server: `http://localhost:8000/mcp`

### Crypto Solver Agent

The client is tuned to:

1. **Explore**: List files (e.g. in `/tmp`) to find challenge files.
2. **Analyze**: Recognize crypto (RSA, AES, DES, XOR, encodings) and typical weaknesses.
3. **Execute**: Write and run Python scripts to recover keys or plaintext.
4. **Validate**: Look for flags in the form `flag{...}`.

Covered areas: RSA (small modulus, low exponent, Wiener, Hastad, common modulus), symmetric (AES/DES, ECB/CBC, IV/key reuse), classical ciphers, Base64/Hex/endianness.
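A hypothetical end-to-end run against the bundled challenge (the prompt and paths are illustrative; the agent expects the challenge files to be reachable in `/tmp`):

```bash
cp challs/cryptoEasy/* /tmp/
python mcpClient.py --mcp-server "http://localhost:8000/mcp" \
  --prompt "List /tmp, analyze cryptoeasy.txt and challenge.py, and recover the flag (format flag{...})"
```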
## CTF Challenges

- **cryptoEasy**: Diffie–Hellman + AES encryption challenge (in `challs/cryptoEasy/`).

---

## Candidate MCP Servers

Other MCP servers you can combine with Cariddi or use in similar workflows (reverse engineering, binary analysis, malware analysis, shell execution):

| Project | Description |
|---------|-------------|
| [radare2-mcp](https://github.com/radareorg/radare2-mcp) | MCP stdio server for radare2 – binary analysis with r2, r2pipe, optional raw r2 commands. |
| [headless-ida-mcp-server](https://github.com/cnitlrt/headless-ida-mcp-server) | Headless IDA Pro MCP server – analyze binaries via IDA’s headless mode (idat). |
| [MalwareAnalyzerMCP](https://github.com/abdessamad-elamrani/malwareanalyzermcp) | MCP server for malware analysis – `file`, `strings`, `hexdump`, `objdump`, `xxd`, shell commands with timeouts. |
| [GhidrAssistMCP](https://github.com/jtang613/ghidrassistmcp) | Ghidra MCP extension – 34 tools, resources, prompts for reverse engineering (decompile, xrefs, structs, etc.). |
| [shell-exec-mcp](https://github.com/domdomegg/shell-exec-mcp) | MCP server for shell command execution – run bash commands with optional timeout and background jobs. |
| [ida-pro-mcp](https://github.com/mrexodia/ida-pro-mcp) | IDA Pro MCP bridge – AI-assisted reversing in IDA (decompile, disasm, xrefs, types, debugger extension). |

---
challs/cryptoEasy/challenge.py (new file, 63 lines)

```python
#!/usr/bin/env python3

from Crypto.Cipher import AES
from Crypto.Util.Padding import pad, unpad
import os


def factorial(x):
    prod = 1
    for i in range(1, x + 1):
        prod = prod * i
    return prod


a = 3
b = 8
p = 159043501668831001976189741401919059600158436023339250375247150721773143712698491956718970846959154624950002991005143073475212844582380943612898306056733646147380223572684106846684017427300415826606628398091756029258247836173822579694289151452726958472153473864316673552015163436466970719494284188245853583109
g = factorial(p-1)

flag = "flag{...}"


def getDHkey():
    A = pow(g, a, p)
    B = pow(g, b, p)
    K = pow(B, a, p)

    return K


def handle():
    keyExchanged = str(getDHkey())
    encryptedFlag = encrypt(flag.encode("utf-8"), keyExchanged)
    print("Il messaggio crittografato è: {0}".format(encryptedFlag.hex()))

    return


def fakePadding(k):
    if (len(k) > 16):
        raise ValueError('La tua chiave è più lunga di 16 byte')
    else:
        if len(k) == 16:
            return k
        else:
            missingBytes = 16 - len(k)
            for i in range(missingBytes):
                k = ''.join([k, "0"])
            return k


def encrypt(f, k):
    key = bytes(fakePadding(k), "utf-8")

    cipher = AES.new(key, AES.MODE_ECB)
    encryptedFlag = cipher.encrypt(pad(f, AES.block_size))
    return encryptedFlag


def decrypt(f, k):

    key = fakePadding(str(k))

    chiave = bytes(key, "utf-8")
    cipher = AES.new(chiave, AES.MODE_ECB)
    decryptedFlag = cipher.decrypt(f)
    return decryptedFlag


if __name__ == "__main__":
    handle()
```
challs/cryptoEasy/cryptoeasy.txt (new file, 3 lines)

```text
Diffie Hellman è così costoso computazionalmente se si usano valori particolari, come venirne fuori?

Ciphertext: b5609cfbad99f1b20ec3a93b97f379d8426f934ffcb77d83ea9161fefa78d243
```
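(The hint translates to: "Diffie-Hellman is so computationally expensive if particular values are used — how do we get out of it?") The intended shortcut, assuming `p` is prime: by Wilson's theorem `(p-1)! ≡ -1 (mod p)`, so `g ≡ -1`, `B = g^8 mod p = 1`, and the shared key `K = B^3 mod p = 1`; `fakePadding` then turns `"1"` into the 16-byte key `b"1000000000000000"`. A minimal solver sketch under that assumption (needs `pycryptodome`):

```python
from Crypto.Cipher import AES
from Crypto.Util.Padding import unpad

# Wilson's theorem (p prime): (p-1)! ≡ -1 (mod p), so K = pow(pow(g, 8, p), 3, p) = 1
key = "1".ljust(16, "0").encode()  # fakePadding("1") -> b"1000000000000000"
ct = bytes.fromhex("b5609cfbad99f1b20ec3a93b97f379d8426f934ffcb77d83ea9161fefa78d243")
print(unpad(AES.new(key, AES.MODE_ECB).decrypt(ct), AES.block_size))
```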
mcpClient/mcpClient.py (new file, 698 lines)

```python
#!/usr/bin/env python3
"""
MCP client that uses Ollama for inference and LangChain create_agent with
runtime-registered MCP tools (see https://docs.langchain.com/oss/python/langchain/agents#runtime-tool-registration).
"""

import json
import sys
import os
import asyncio
from pathlib import Path
from typing import Optional, Dict, Any, List, Callable, Awaitable

import requests
from fastmcp import Client as FastMcpClient
from ollama import ResponseError as OllamaResponseError
from pydantic import BaseModel, ConfigDict, Field, create_model

# LangChain agent and middleware
try:
    from langchain.agents import create_agent
    from langchain.agents.middleware import AgentMiddleware, ModelRequest, ModelResponse, ToolCallRequest
    from langchain_core.tools import StructuredTool, tool
    from langchain_ollama import ChatOllama
    from langchain_core.messages import HumanMessage, AIMessage, SystemMessage, ToolMessage
except ImportError as e:
    print(f"Missing dependency: {e}. Install with: pip install langchain langgraph langchain-community langchain-ollama", file=sys.stderr)
    sys.exit(1)


@tool
def getTime() -> str:
    """Get the current time in ISO format."""
    from datetime import datetime
    return datetime.now().isoformat()


@tool
def countWords(text: str) -> int:
    """Count the number of words in a text."""
    return len(text.split())


def loadMcpConfig(configPath: Optional[str] = None) -> Dict[str, str]:
    """Load MCP server URLs from mcp.json. Returns dict serverName -> url."""
    if configPath is None:
        # Default: mcpServer/mcp.json relative to project root or cwd
        base = Path(__file__).resolve().parent.parent
        configPath = str(base / "mcpServer" / "mcp.json")
    path = Path(configPath)
    if not path.exists():
        return {}
    try:
        with open(path, "r", encoding="utf-8") as f:
            data = json.load(f)
    except (json.JSONDecodeError, OSError) as e:
        print(f"Warning: Could not load MCP config from {path}: {e}", file=sys.stderr)
        return {}
    servers = data.get("mcpServers") or data.get("mcp_servers") or {}
    return {name: info.get("url", "") for name, info in servers.items() if isinstance(info, dict) and info.get("url")}
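
# Example of the config shape loadMcpConfig accepts (this matches the
# mcpServer/mcp.json added later in this change; keys other than "url" are ignored):
#   {"mcpServers": {"fetch": {"url": "http://localhost:3000/sse"}}}
# -> returns {"fetch": "http://localhost:3000/sse"}
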
class GenericToolArgs(BaseModel):
    """Accept any keyword arguments for MCP tool calls (fallback when schema is missing)."""
    model_config = ConfigDict(extra="allow")


def _jsonSchemaTypeToPython(jsonType: str) -> type:
    """Map a JSON schema type to a Python type."""
    return {"string": str, "integer": int, "number": float, "boolean": bool, "array": list, "object": dict}.get(jsonType, str)


def _defaultForJsonType(jsonType: str) -> Any:
    """Sensible default for optional MCP params so the server does not receive null."""
    return {"string": "", "integer": 0, "number": 0.0, "boolean": False, "array": [], "object": {}}.get(jsonType, "")


def _defaultsFromInputSchema(inputSchema: Dict[str, Any]) -> Dict[str, Any]:
    """Build default values for all params so we never send null to the MCP server (the LLM may omit required params)."""
    if not inputSchema:
        return {}
    properties = inputSchema.get("properties") or {}
    out: Dict[str, Any] = {}
    for name, spec in properties.items():
        if not isinstance(spec, dict):
            continue
        if "default" in spec:
            out[name] = spec["default"]
        else:
            out[name] = _defaultForJsonType(spec.get("type", "string"))
    return out


def buildArgsSchemaFromMcpInputSchema(toolName: str, inputSchema: Dict[str, Any]) -> type[BaseModel]:
    """Build a Pydantic model from the MCP tool inputSchema so the LLM gets exact parameter names (path, content, etc.)."""
    if not inputSchema:
        return GenericToolArgs
    properties = inputSchema.get("properties") or {}
    required = set(inputSchema.get("required") or [])
    if not properties:
        return GenericToolArgs
    fields: Dict[str, Any] = {}
    for name, spec in properties.items():
        if not isinstance(spec, dict):
            continue
        desc = spec.get("description", "")
        jsonType = spec.get("type", "string")
        pyType = _jsonSchemaTypeToPython(jsonType)
        if name in required:
            fields[name] = (pyType, Field(..., description=desc))
        else:
            fields[name] = (Optional[pyType], Field(None, description=desc))
    if not fields:
        return GenericToolArgs
    return create_model(f"McpArgs_{toolName}", **fields)
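
# Illustrative example: for an MCP tool "read_file" with inputSchema
#   {"properties": {"path": {"type": "string", "description": "File path"}}, "required": ["path"]}
# buildArgsSchemaFromMcpInputSchema("read_file", ...) yields a model equivalent to
#   class McpArgs_read_file(BaseModel):
#       path: str = Field(..., description="File path")
# so the LLM is shown the exact parameter name "path" rather than a free-form dict.
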
class OllamaClient:
    """Client for interacting with the Ollama API."""

    def __init__(self, baseUrl: str = "http://localhost:11434", model: str = "gpt-oss:20b"):
        self.baseUrl = baseUrl
        self.model = model

    def listModels(self) -> List[str]:
        """List available Ollama models."""
        try:
            response = requests.get(f"{self.baseUrl}/api/tags", timeout=10)
            response.raise_for_status()
            data = response.json()
            return [model["name"] for model in data.get("models", [])]
        except requests.RequestException as e:
            print(f"Error listing models: {e}", file=sys.stderr)
            return []

    def chat(self, messages: List[Dict[str, str]], options: Optional[Dict[str, Any]] = None) -> str:
        """Send chat messages to Ollama and get the response."""
        payload = {
            "model": self.model,
            "messages": messages,
            "stream": False,
        }

        if options:
            payload["options"] = options

        try:
            response = requests.post(
                f"{self.baseUrl}/api/chat",
                json=payload,
                timeout=60 * 60 * 60,  # 216000 s (60 hours): effectively unbounded for long agent runs
            )
            response.raise_for_status()
            data = response.json()
            return data.get("message", {}).get("content", "")
        except requests.RequestException as e:
            print(f"Error in chat request: {e}", file=sys.stderr)
            raise

    def generate(self, prompt: str, options: Optional[Dict[str, Any]] = None) -> str:
        """Generate text from a prompt using Ollama."""
        payload = {
            "model": self.model,
            "prompt": prompt,
            "stream": False,
        }

        if options:
            payload["options"] = options

        try:
            response = requests.post(
                f"{self.baseUrl}/api/generate",
                json=payload,
                timeout=120,
            )
            response.raise_for_status()
            data = response.json()
            return data.get("response", "")
        except requests.RequestException as e:
            print(f"Error in generate request: {e}", file=sys.stderr)
            raise

    def checkHealth(self) -> bool:
        """Check if the Ollama server is accessible."""
        try:
            response = requests.get(f"{self.baseUrl}/api/tags", timeout=5)
            return response.status_code == 200
        except requests.RequestException:
            return False
class McpServerWrapper:
    """Wrapper around the FastMCP Client for easier use."""

    def __init__(self, httpUrl: str, headers: Optional[Dict[str, str]] = None):
        self.httpUrl = httpUrl.rstrip("/")
        self.headers = headers or {}
        self.client: Optional[FastMcpClient] = None
        self.serverTools: List[Dict[str, Any]] = []

    async def connect(self) -> bool:
        """Connect to and initialize the MCP server via HTTP."""
        try:
            # FastMcpClient doesn't support a headers parameter directly.
            # Headers would need to be passed via a custom transport or auth;
            # for now, we initialize without headers.
            self.client = FastMcpClient(self.httpUrl)
            await self.client.__aenter__()
            # Load tools after connection
            await self.listServerTools()
            return True
        except Exception as e:
            print(f"Error connecting to MCP server: {e}", file=sys.stderr)
            return False

    async def disconnect(self):
        """Disconnect from the MCP server."""
        if self.client:
            await self.client.__aexit__(None, None, None)
            self.client = None

    async def listServerTools(self) -> List[Dict[str, Any]]:
        """List tools available from the MCP server."""
        if not self.client:
            return []

        try:
            tools = await self.client.list_tools()
            self.serverTools = tools
            return tools
        except Exception as e:
            print(f"Error listing tools: {e}", file=sys.stderr)
            return []

    async def callServerTool(self, name: str, arguments: Dict[str, Any]) -> Any:
        """Call a tool on the MCP server."""
        if not self.client:
            raise RuntimeError("Not connected to MCP server")

        try:
            result = await self.client.call_tool(name, arguments)
            # FastMCP call_tool returns a result object with .content
            if hasattr(result, 'content'):
                # Lists are returned as-is (they are serialized later)
                return result.content
            elif isinstance(result, list):
                if len(result) > 0:
                    # Extract .content from each item if it exists
                    contents = [item.content if hasattr(item, 'content') else item for item in result]
                    return contents if len(contents) > 1 else contents[0] if contents else None
                return result
            return result
        except Exception as e:
            raise RuntimeError(f"Tool call failed: {str(e)}")

    async def listServerResources(self) -> List[Dict[str, Any]]:
        """List resources available from the MCP server."""
        if not self.client:
            return []

        try:
            resources = await self.client.list_resources()
            return resources
        except Exception as e:
            print(f"Error listing resources: {e}", file=sys.stderr)
            return []


def _serializeToolResult(result: Any) -> Any:
    """Serialize a tool result to a JSON-serializable format."""
    if hasattr(result, "text"):
        return result.text
    if hasattr(result, "content"):
        content = result.content
        if hasattr(content, "text"):
            return content.text
        return content
    if isinstance(result, list):
        return [_serializeToolResult(item) for item in result]
    if isinstance(result, dict):
        return {k: _serializeToolResult(v) for k, v in result.items()}
    return result


def _makeMcpToolCoroutine(
    toolName: str,
    server: McpServerWrapper,
    defaultArgs: Dict[str, Any],
    toolTimeout: Optional[float] = None,
) -> Callable[..., Awaitable[Any]]:
    async def _invoke(**kwargs: Any) -> Any:
        merged = {**defaultArgs, **kwargs}
        # Strip None values - MCP server Zod schemas often reject null for optional
        # params (they expect number | undefined, not number | null)
        merged = {k: v for k, v in merged.items() if v is not None}
        try:
            if toolTimeout is not None and toolTimeout > 0:
                result = await asyncio.wait_for(
                    server.callServerTool(toolName, merged),
                    timeout=toolTimeout,
                )
            else:
                result = await server.callServerTool(toolName, merged)
        except asyncio.TimeoutError:
            return (
                f"[Tool timeout] '{toolName}' exceeded {toolTimeout}s. "
                "The operation may have hung (e.g. command not found, subprocess blocking). "
                "Try an alternative (e.g. 'python' instead of 'python3') or increase --tool-timeout."
            )
        return _serializeToolResult(result)
    return _invoke
async def buildMcpLangChainTools(
    mcpServers: List[McpServerWrapper],
    toolTimeout: Optional[float] = None,
) -> List[StructuredTool]:
    """Build LangChain StructuredTools from connected MCP servers (runtime tool registration)."""
    tools: List[StructuredTool] = []
    for server in mcpServers:
        rawTools = await server.listServerTools()
        for raw in rawTools:
            name = getattr(raw, "name", None) or (raw.get("name") if isinstance(raw, dict) else None)
            description = getattr(raw, "description", None) or (raw.get("description", "") if isinstance(raw, dict) else "")
            inputSchema = getattr(raw, "inputSchema", None) or getattr(raw, "input_schema", None) or (raw.get("inputSchema") or raw.get("input_schema") if isinstance(raw, dict) else None)
            if not name:
                continue
            description = description or f"MCP tool: {name}"
            schemaDict = inputSchema or {}
            argsSchema = buildArgsSchemaFromMcpInputSchema(name, schemaDict)
            defaultArgs = _defaultsFromInputSchema(schemaDict)
            # Named mcpTool so it does not shadow the imported @tool decorator
            mcpTool = StructuredTool.from_function(
                name=name,
                description=description,
                args_schema=argsSchema,
                coroutine=_makeMcpToolCoroutine(name, server, defaultArgs, toolTimeout),
            )
            tools.append(mcpTool)
    return tools


class LogToolCallsMiddleware(AgentMiddleware):
    """Middleware that logs every tool call (name and args)."""

    def wrap_tool_call(self, request: ToolCallRequest, handler: Callable):
        _logToolCallRequest(request)
        return handler(request)

    async def awrap_tool_call(self, request: ToolCallRequest, handler: Callable):
        _logToolCallRequest(request)
        return await handler(request)


def _extractTextFromAIMessageContent(content: Any) -> str:
    """Extract plain text from AIMessage.content (str or list of content blocks)."""
    if content is None:
        return ""
    if isinstance(content, str):
        return content.strip()
    if isinstance(content, list):
        parts: List[str] = []
        for block in content:
            if isinstance(block, dict) and "text" in block:
                parts.append(str(block["text"]))
            elif isinstance(block, str):
                parts.append(block)
        return "\n".join(parts).strip() if parts else ""
    return str(content).strip()


def _extractFinalResponse(result: Dict[str, Any]) -> str:
    """Extract the final assistant text from the agent result; handle recursion limit / no final message."""
    messages = result.get("messages") or []
    for msg in reversed(messages):
        if isinstance(msg, AIMessage) and hasattr(msg, "content"):
            text = _extractTextFromAIMessageContent(msg.content)
            if text:
                return text
    return (
        "Agent stopped without a final text response (e.g. hit the step limit after tool calls). "
        "Try again or increase --recursion-limit."
    )


def _logToolCallRequest(request: ToolCallRequest) -> None:
    tc = request.tool_call
    name = tc.get("name") if isinstance(tc, dict) else getattr(tc, "name", None)
    args = tc.get("args", tc.get("arguments", {})) if isinstance(tc, dict) else getattr(tc, "args", getattr(tc, "arguments", {}))
    argsStr = json.dumps(args, ensure_ascii=False)
    if len(argsStr) > 500:
        argsStr = argsStr[:497] + "..."
    print(f"[Tool Call] {name} args={argsStr}", file=sys.stderr)


class McpToolsMiddleware(AgentMiddleware):
    """Middleware that adds MCP tools at runtime and handles their execution (runtime tool registration)."""

    def __init__(self, mcpTools: List[StructuredTool], staticToolNames: Optional[List[str]] = None):
        self.mcpTools = mcpTools
        self.mcpToolsByName = {t.name: t for t in mcpTools}
        staticNames = set(staticToolNames or [])
        self.validToolNames = staticNames | set(self.mcpToolsByName.keys())

    def wrap_model_call(self, request: ModelRequest, handler: Callable) -> ModelResponse:
        updated = request.override(tools=[*request.tools, *self.mcpTools])
        return handler(updated)

    async def awrap_model_call(self, request: ModelRequest, handler: Callable):
        updated = request.override(tools=[*request.tools, *self.mcpTools])
        return await handler(updated)

    def _toolExists(self, name: Optional[str]) -> bool:
        return bool(name and name in self.validToolNames)

    def _unknownToolErrorToolMessage(self, request: ToolCallRequest, name: str) -> ToolMessage:
        available = ", ".join(sorted(self.validToolNames))
        content = (
            f"[Error] Tool '{name}' does not exist. "
            f"Only the following tools are available: {available}. "
            "Do not call tools that are not in this list."
        )
        tc = request.tool_call
        toolCallId = tc.get("id") if isinstance(tc, dict) else getattr(tc, "id", None)
        return ToolMessage(
            content=content,
            tool_call_id=toolCallId or "unknown",
            name=name or "unknown",
            status="error",
        )

    def wrap_tool_call(self, request: ToolCallRequest, handler: Callable):
        name = request.tool_call.get("name") if isinstance(request.tool_call, dict) else getattr(request.tool_call, "name", None)
        if not self._toolExists(name):
            return self._unknownToolErrorToolMessage(request, name or "<unknown>")
        if name and name in self.mcpToolsByName:
            return handler(request.override(tool=self.mcpToolsByName[name]))
        return handler(request)

    async def awrap_tool_call(self, request: ToolCallRequest, handler: Callable):
        name = request.tool_call.get("name") if isinstance(request.tool_call, dict) else getattr(request.tool_call, "name", None)
        if not self._toolExists(name):
            return self._unknownToolErrorToolMessage(request, name or "<unknown>")
        if name and name in self.mcpToolsByName:
            return await handler(request.override(tool=self.mcpToolsByName[name]))
        return await handler(request)
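
# How the pieces above are wired together (mirrors OllamaMcpClient below):
#   agent = create_agent(
#       model,
#       tools=[getTime, countWords],  # static tools
#       middleware=[LogToolCallsMiddleware(),
#                   McpToolsMiddleware(mcpTools, staticToolNames=["getTime", "countWords"])],
#   )
# wrap_model_call advertises the MCP tools to the model on every request, while
# wrap_tool_call rejects hallucinated tool names and routes real ones to FastMCP.
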
''' TODO Use this if you want sequential thinking
SYSTEM_PROMPT = """
ROLE:
Sei un esperto Analista di Cybersecurity specializzato in CTF (Capture The Flag) e analisi di vulnerabilità. Operi in un ambiente Linux sandbox dove la tua unica area di lavoro è la directory /tmp.

WORKSPACE CONSTRAINT: IL "SINGLE SOURCE OF TRUTH"
- Obbligo Assoluto: Tutte le operazioni di lettura, scrittura, download e analisi devono avvenire esclusivamente all'interno di /tmp.
- Percorsi: Ogni file deve essere referenziato con il percorso assoluto (es. /tmp/binary.bin). Non usare mai directory come ~/, /home o altre al di fuori di /tmp.
- Condivisione: Ricorda che /tmp è montata su tutti i container MCP (fetch, filesystem, ecc.). Se scarichi un file con fetch in /tmp, il tool filesystem lo troverà immediatamente lì.

TOOLSET & WORKFLOW:
Utilizza i tuoi tool secondo questa logica:
1. sequentialthinking (Pianificazione): Usa questo tool PRIMA di ogni azione complessa. Suddividi la sfida in step logici (es. 1. Download, 2. Analisi Header, 3. Estrazione Flag). Ti aiuta a non perdere il filo durante task lunghi.
2. fetch (Ingestion): Usalo per recuperare binari, exploit o dati remoti. Salva l'output sempre in /tmp.
3. filesystem (Manipolazione): Usalo per ispezionare i file scaricati, creare script di exploit o leggere file di log e flag direttamente in /tmp.
4. memory (Stato): Utilizza questo tool per memorizzare scoperte chiave, indirizzi di memoria, offset o password trovate durante la sfida. Ti serve per mantenere il contesto tra diverse fasi del ragionamento.

METODOLOGIA DI ANALISI:
- Ipotesi e Test: Prima di agire, formula un'ipotesi basata sui dati presenti in /tmp.
- Verifica Intermedia: Dopo ogni comando o modifica ai file, verifica il risultato usando il tool filesystem. Non dare mai per scontato che un'operazione sia riuscita senza controllare.
- Pulizia Mentale: Se una strategia fallisce, usa sequentialthinking per rivedere il piano e aggiorna il tool memory con il motivo del fallimento per non ripetere lo stesso errore.

REGOLE DI COMUNICAZIONE:
- Sii estremamente tecnico, sintetico e preciso.
- Se un file non è presente in /tmp, non provare a indovinarne il contenuto; usa fetch per ottenerlo o filesystem per cercarlo.
- Rispondi con l'output delle tue analisi e l'eventuale flag trovata nel formato richiesto dalla sfida.
"""
'''

SYSTEM_PROMPT = "ROLE:\nSei un esperto Analista di Cybersecurity specializzato in CTF (Capture The Flag) e analisi di vulnerabilità. Operi in un ambiente Linux sandbox dove la tua unica area di lavoro è la directory /tmp.\n\nWORKSPACE CONSTRAINT: IL \"SINGLE SOURCE OF TRUTH\"\n- Obbligo Assoluto: Tutte le operazioni di lettura, scrittura e analisi devono avvenire esclusivamente all'interno di /tmp.\n- Percorsi: Ogni file deve essere referenziato con il percorso assoluto (es. /tmp/binary.bin). Non usare mai directory esterne a /tmp.\n- Condivisione: /tmp è montata su tutti i container MCP. I file creati o modificati da un tool sono immediatamente visibili agli altri.\n\nSTRETTO DIVIETO DI ALLUCINAZIONE TOOL:\n- USA ESCLUSIVAMENTE I TOOL MCP FORNITI: 'memory', 'filesystem'.\n- NON INVENTARE MAI TOOL INESISTENTI: È severamente vietato tentare di richiamare tool come \"run\", \"fetch\", \"execute_command\", \"shell\" o simili.\n- Se un tool non è in questa lista ('memory', 'filesystem'), NON esiste e non puoi usarlo.\n- Se senti la necessità di scaricare dati o eseguire comandi, ricorda che non hai tool per farlo; puoi solo operare sui file già presenti in /tmp tramite 'filesystem' o ragionare sugli stati tramite 'memory'.\n\nTOOLSET & WORKFLOW:\n1. memory (Pianificazione e Stato): È il tuo unico strumento di ragionamento e log. Usalo per definire il piano d'azione, suddividere la sfida in step e memorizzare scoperte (offset, password, indirizzi). Aggiorna la memoria prima di ogni azione.\n2. filesystem (Manipolazione): È il tuo unico strumento operativo. Usalo per ispezionare file esistenti, leggere contenuti, creare script o archiviare risultati esclusivamente in /tmp.\n\nMETODOLOGIA DI ANALISI:\n- Ragionamento Persistente: Documenta ogni ipotesi, passo logico e test nel tool memory.\n- Verifica Intermedia: Dopo ogni operazione sul filesystem, usa 'filesystem' per confermare che l'azione abbia prodotto il risultato atteso.\n- Gestione Errori: Se non trovi i file necessari in /tmp, segnalalo chiaramente senza provare a inventare tool per scaricarli o generarli.\n\nREGOLE DI COMUNICAZIONE:\n- Sii estremamente tecnico, sintetico e preciso.\n- Non fare mai riferimento a tool che non siano 'memory' o 'filesystem'."

class OllamaMcpClient:
    """MCP client that uses Ollama and LangChain create_agent with optional runtime MCP tools."""

    def __init__(
        self,
        ollamaClient: OllamaClient,
        mcpTools: Optional[List[StructuredTool]] = None,
        systemPrompt: Optional[str] = None,
    ):
        self.ollamaClient = ollamaClient
        self.mcpTools = mcpTools or []
        self.systemPrompt = systemPrompt or SYSTEM_PROMPT
        staticTools: List[Any] = [getTime, countWords]
        staticToolNames = [getTime.name, countWords.name]
        middleware: List[AgentMiddleware] = [LogToolCallsMiddleware()]
        if self.mcpTools:
            middleware.append(McpToolsMiddleware(self.mcpTools, staticToolNames=staticToolNames))
        model = ChatOllama(
            base_url=ollamaClient.baseUrl,
            model=ollamaClient.model,
            temperature=0.1,
        )
        self.agent = create_agent(
            model,
            tools=staticTools,
            middleware=middleware,
            system_prompt=self.systemPrompt,
        )

    async def processRequest(self, prompt: str, context: Optional[List[str]] = None, recursionLimit: int = 50) -> str:
        """Process a request using the LangChain agent (ReAct loop with tools)."""
        messages: List[Any] = [HumanMessage(content=prompt)]
        if context:
            messages.insert(0, SystemMessage(content=f"Context:\n{chr(10).join(context)}"))
        config: Dict[str, Any] = {"recursion_limit": recursionLimit}
        toolParseRetryPrompt = (
            "ATTENZIONE: Una chiamata write_file ha prodotto JSON non valido. "
            "Quando scrivi file con codice Python: usa \\n per le newline nel JSON, escapa le virgolette con \\. "
            "Non aggiungere parametri extra (es. overwrite). Usa edit_file per modifiche incrementali se il contenuto è lungo."
        )
        try:
            result = await self.agent.ainvoke({"messages": messages}, config=config)
        except OllamaResponseError as e:
            errStr = str(e)
            if "error parsing tool call" in errStr:
                print(f"[Agent Error]: Tool call parse error, retrying with guidance: {errStr[:200]}...", file=sys.stderr)
                retryMessages: List[Any] = [SystemMessage(content=toolParseRetryPrompt)]
                retryMessages.extend(messages)
                result = await self.agent.ainvoke({"messages": retryMessages}, config=config)
            else:
                print(f"[Agent Error]: {e}", file=sys.stderr)
                raise
        except Exception as e:
            print(f"[Agent Error]: {e}", file=sys.stderr)
            raise
        return _extractFinalResponse(result)

    def listTools(self) -> List[str]:
        """List tool names (static + MCP)."""
        names = [getTime.name, countWords.name]
        names.extend(t.name for t in self.mcpTools)
        return names


async def async_main(args, ollamaClient: OllamaClient):
    """Async main: MCP tools come only from mcp.json (Docker containers exposing SSE). Ollama is used only as the LLM."""
    mcpTools: List[StructuredTool] = []
    mcpServers: List[McpServerWrapper] = []

    # MCP servers from the config file (mcp.json) – Docker containers with SSE endpoints
    serverUrls: Dict[str, str] = loadMcpConfig(args.mcp_config)
    if args.mcp_server:
        serverUrls["default"] = args.mcp_server.rstrip("/")

    # Which servers to use: default = all from mcp.json; or --mcp-tools fetch,filesystem to pick a subset
    wantServers = [s.strip() for s in (args.mcp_tools or "").split(",") if s.strip()]
    if not wantServers and serverUrls:
        wantServers = list(serverUrls.keys())
        print(f"MCP tools from config (all SSE servers): {wantServers}", file=sys.stderr)
    for name in wantServers:
        url = serverUrls.get(name)
        if not url:
            print(f"Warning: MCP server '{name}' not in config (known: {list(serverUrls.keys())})", file=sys.stderr)
            continue
        wrapper = McpServerWrapper(httpUrl=url)
        if await wrapper.connect():
            mcpServers.append(wrapper)
            print(f"Connected to MCP server '{name}' at {url}", file=sys.stderr)
        else:
            print(f"Error: Failed to connect to MCP server '{name}' at {url}", file=sys.stderr)

    if mcpServers:
        mcpTools = await buildMcpLangChainTools(mcpServers, toolTimeout=getattr(args, "tool_timeout", None))
        # print(f"Loaded {len(mcpTools)} MCP tools: {[t.name for t in mcpTools]}", file=sys.stderr)

    mcpClient = OllamaMcpClient(ollamaClient, mcpTools=mcpTools)
    print(f"Agent tools: {mcpClient.listTools()}", file=sys.stderr)

    if args.prompt:
        response = await mcpClient.processRequest(args.prompt, recursionLimit=args.recursion_limit)
        print(response)
    elif args.interactive:
        print("MCP Client with Ollama (LangChain agent) - Interactive Mode")
        print("Type 'quit' or 'exit' to exit\n")
        while True:
            try:
                prompt = input("You: ").strip()
                if prompt.lower() in ["quit", "exit"]:
                    break
                if not prompt:
                    continue
                response = await mcpClient.processRequest(prompt, recursionLimit=args.recursion_limit)
                print(f"Assistant: {response}\n")
            except KeyboardInterrupt:
                print("\nGoodbye!")
                break
            except Exception as e:
                print(f"Error: {e}", file=sys.stderr)

    for wrapper in mcpServers:
        await wrapper.disconnect()
def main():
    """Main function to run the MCP client."""
    import argparse

    parser = argparse.ArgumentParser(description="MCP client using Ollama")
    parser.add_argument(
        "--base-url",
        default="http://localhost:11434",
        help="Ollama base URL (default: http://localhost:11434)"
    )
    parser.add_argument(
        "--model",
        default="gpt-oss:20b",
        help="Ollama model to use (default: gpt-oss:20b)"
    )
    parser.add_argument(
        "--list-models",
        action="store_true",
        help="List available Ollama models and exit"
    )
    parser.add_argument(
        "--prompt",
        help="Prompt to send to the model"
    )
    parser.add_argument(
        "--interactive",
        "-i",
        action="store_true",
        help="Run in interactive mode"
    )
    parser.add_argument(
        "--mcp-config",
        default=None,
        help="Path to mcp.json (default: mcpServer/mcp.json relative to the project)"
    )
    parser.add_argument(
        "--mcp-tools",
        default="",
        help="Comma-separated MCP server names from mcp.json (default: all servers in config). E.g. fetch,filesystem"
    )
    parser.add_argument(
        "--mcp-server",
        help="Override: single MCP SSE URL (e.g. http://localhost:3000/sse). Added as server 'default' in addition to mcp.json."
    )
    parser.add_argument(
        "--mcp-headers",
        help="Additional headers for the MCP server as a JSON string (e.g. '{\"Authorization\": \"Bearer token\"}')"
    )
    parser.add_argument(
        "--recursion-limit",
        type=int,
        default=5000,
        help="Max agent steps (model + tool calls) before stopping (default: 5000)"
    )
    parser.add_argument(
        "--tool-timeout",
        type=float,
        default=60,
        help="Timeout in seconds for each MCP tool call. Prevents the agent from freezing when a tool hangs (e.g. run with a missing executable). Default: 60"
    )

    args = parser.parse_args()

    # Initialize the Ollama client
    ollamaClient = OllamaClient(baseUrl=args.base_url, model=args.model)

    # Check health
    if not ollamaClient.checkHealth():
        print(f"Error: Cannot connect to Ollama at {args.base_url}", file=sys.stderr)
        print("Make sure Ollama is running and accessible.", file=sys.stderr)
        sys.exit(1)

    # List models if requested
    if args.list_models:
        models = ollamaClient.listModels()
        print("Available models:")
        for model in models:
            print(f"  - {model}")
        sys.exit(0)

    # Nothing to do without a prompt or interactive mode: show help and exit
    if not args.prompt and not args.interactive:
        parser.print_help()
        return

    # Run the async main
    asyncio.run(async_main(args, ollamaClient))


if __name__ == "__main__":
    main()
```
mcpClient/requirements.txt (new file, 8 lines)

```
requests>=2.31.0
fastmcp>=0.9.0
langchain>=0.3.0
langchain-core>=0.3.0
langgraph>=0.2.0
langchain-community>=0.3.0
langchain-ollama>=0.2.0
pydantic>=2.0.0
```
mcpServer/.dockerignore (new file, 39 lines)

```
# Python
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
*.egg-info/
dist/
build/
wheels/

# Virtual environments
venv/
.venv/
env/
ENV/

# Git
.git/
.gitignore

# IDE
.vscode/
.idea/
*.swp
*.swo
*~

# OS
.DS_Store
Thumbs.db

# Docker
Dockerfile
.dockerignore
docker-compose.yml

# Documentation
README.md
```
mcpServer/mcp.json (new file, 16 lines)

```json
{
  "mcpServers": {
    "fetch": {
      "url": "http://localhost:3000/sse"
    },
    "filesystem": {
      "url": "http://localhost:3001/sse"
    },
    "memory": {
      "url": "http://localhost:3002/sse"
    },
    "sequentialthinking": {
      "url": "http://localhost:3003/sse"
    }
  }
}
```
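Once the Compose stack (below) is up, each endpoint can be smoke-tested individually with the MCP Inspector, mirroring the invocation used earlier in this README (the `fetch` URL is shown as an example; use transport SSE):

```bash
npx @modelcontextprotocol/inspector --url http://localhost:3000/sse
```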
mcpServer/modules/docker-compose.yml (new file, 36 lines)

```yaml
services:
  fetch:
    build:
      context: ./fetch
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    environment:
      MCP_PORT: 3000

  filesystem:
    build:
      context: ./filesystem
      dockerfile: Dockerfile
    ports:
      - "3001:3001"
    environment:
      MCP_PORT: 3001

  memory:
    build:
      context: ./memory
      dockerfile: Dockerfile
    ports:
      - "3002:3002"
    environment:
      MCP_PORT: 3002

  sequentialthinking:
    build:
      context: ./sequentialthinking
      dockerfile: Dockerfile
    ports:
      - "3003:3003"
    environment:
      MCP_PORT: 3003
```
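To build and start all four SSE containers from this file's directory:

```bash
docker compose up -d --build
docker compose ps   # fetch:3000, filesystem:3001, memory:3002, sequentialthinking:3003
```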
mcpServer/modules/fetch/.python-version (new file, 1 line)

```
3.11
```
mcpServer/modules/fetch/Dockerfile (new file, 33 lines)

```dockerfile
# Use a Python image with uv pre-installed
FROM ghcr.io/astral-sh/uv:python3.12-bookworm-slim AS uv

# Install the project into `/app`
WORKDIR /app

# Enable bytecode compilation
ENV UV_COMPILE_BYTECODE=1

# Copy from the cache instead of linking since it's a mounted volume
ENV UV_LINK_MODE=copy

# Install the project's dependencies using the lockfile and settings
ADD . /app
RUN uv lock
RUN uv sync --locked --no-install-project --no-dev --no-editable

# Then, add the rest of the project source code and install it
# Installing separately from its dependencies allows optimal layer caching
RUN uv sync --locked --no-dev --no-editable

FROM python:3.12-slim-bookworm

WORKDIR /app

COPY --from=uv /root/.local /root/.local
COPY --from=uv --chown=app:app /app/.venv /app/.venv

# Place executables in the environment at the front of the path
ENV PATH="/app/.venv/bin:$PATH"

# Bind to 0.0.0.0 so the SSE server is reachable from outside the container
ENTRYPOINT ["mcp-server-fetch", "--host", "0.0.0.0"]
```
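A possible standalone build-and-run of this image outside Compose (the tag is illustrative; the port comes from the `MCP_PORT` handling shown in `__init__.py` below):

```bash
docker build -t mcp/fetch-sse .
docker run --rm -p 3000:3000 -e MCP_PORT=3000 mcp/fetch-sse
```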
mcpServer/modules/fetch/README.md (new file, 241 lines)

# Fetch MCP Server

<!-- mcp-name: io.github.modelcontextprotocol/server-fetch -->

A Model Context Protocol server that provides web content fetching capabilities. This server enables LLMs to retrieve and process content from web pages, converting HTML to markdown for easier consumption.

> [!CAUTION]
> This server can access local/internal IP addresses and may represent a security risk. Exercise caution when using this MCP server to ensure this does not expose any sensitive data.

The fetch tool will truncate the response, but by using the `start_index` argument, you can specify where to start the content extraction. This lets models read a webpage in chunks, until they find the information they need.

### Available Tools

- `fetch` - Fetches a URL from the internet and extracts its contents as markdown.
    - `url` (string, required): URL to fetch
    - `max_length` (integer, optional): Maximum number of characters to return (default: 5000)
    - `start_index` (integer, optional): Start content from this character index (default: 0)
    - `raw` (boolean, optional): Get raw content without markdown conversion (default: false)
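For example, a `tools/call` payload for this tool might look like (values are illustrative):

```json
{
  "name": "fetch",
  "arguments": {
    "url": "https://example.com",
    "max_length": 5000,
    "start_index": 0,
    "raw": false
  }
}
```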
### Prompts

- **fetch**
  - Fetch a URL and extract its contents as markdown
  - Arguments:
    - `url` (string, required): URL to fetch

## Installation

Optionally: install Node.js; this will cause the fetch server to use a different HTML simplifier that is more robust.

### Using uv (recommended)

When using [`uv`](https://docs.astral.sh/uv/) no specific installation is needed. We will
use [`uvx`](https://docs.astral.sh/uv/guides/tools/) to directly run *mcp-server-fetch*.

### Using PIP

Alternatively you can install `mcp-server-fetch` via pip:

```
pip install mcp-server-fetch
```

After installation, you can run it as a script using:

```
python -m mcp_server_fetch
```

## Configuration

### Configure for Claude.app

Add to your Claude settings:

<details>
<summary>Using uvx</summary>

```json
{
  "mcpServers": {
    "fetch": {
      "command": "uvx",
      "args": ["mcp-server-fetch"]
    }
  }
}
```
</details>

<details>
<summary>Using docker</summary>

```json
{
  "mcpServers": {
    "fetch": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "mcp/fetch"]
    }
  }
}
```
</details>

<details>
<summary>Using pip installation</summary>

```json
{
  "mcpServers": {
    "fetch": {
      "command": "python",
      "args": ["-m", "mcp_server_fetch"]
    }
  }
}
```
</details>

### Configure for VS Code

For quick installation, use one of the one-click install buttons below...

[Install with uvx in VS Code](https://insiders.vscode.dev/redirect/mcp/install?name=fetch&config=%7B%22command%22%3A%22uvx%22%2C%22args%22%3A%5B%22mcp-server-fetch%22%5D%7D) [Install with uvx in VS Code Insiders](https://insiders.vscode.dev/redirect/mcp/install?name=fetch&config=%7B%22command%22%3A%22uvx%22%2C%22args%22%3A%5B%22mcp-server-fetch%22%5D%7D&quality=insiders)

[Install with docker in VS Code](https://insiders.vscode.dev/redirect/mcp/install?name=fetch&config=%7B%22command%22%3A%22docker%22%2C%22args%22%3A%5B%22run%22%2C%22-i%22%2C%22--rm%22%2C%22mcp%2Ffetch%22%5D%7D) [Install with docker in VS Code Insiders](https://insiders.vscode.dev/redirect/mcp/install?name=fetch&config=%7B%22command%22%3A%22docker%22%2C%22args%22%3A%5B%22run%22%2C%22-i%22%2C%22--rm%22%2C%22mcp%2Ffetch%22%5D%7D&quality=insiders)

For manual installation, add the following JSON block to your User Settings (JSON) file in VS Code. You can do this by pressing `Ctrl + Shift + P` and typing `Preferences: Open User Settings (JSON)`.

Optionally, you can add it to a file called `.vscode/mcp.json` in your workspace. This will allow you to share the configuration with others.

> Note that the `mcp` key is needed when using the `mcp.json` file.

<details>
<summary>Using uvx</summary>

```json
{
  "mcp": {
    "servers": {
      "fetch": {
        "command": "uvx",
        "args": ["mcp-server-fetch"]
      }
    }
  }
}
```
</details>

<details>
<summary>Using Docker</summary>

```json
{
  "mcp": {
    "servers": {
      "fetch": {
        "command": "docker",
        "args": ["run", "-i", "--rm", "mcp/fetch"]
      }
    }
  }
}
```
</details>

### Customization - robots.txt

By default, the server will obey a website's robots.txt file if the request came from the model (via a tool), but not if the request was user initiated (via a prompt). This can be disabled by adding the argument `--ignore-robots-txt` to the `args` list in the configuration.

### Customization - User-agent

By default, depending on whether the request came from the model (via a tool) or was user initiated (via a prompt), the server will use either the user-agent

```
ModelContextProtocol/1.0 (Autonomous; +https://github.com/modelcontextprotocol/servers)
```

or

```
ModelContextProtocol/1.0 (User-Specified; +https://github.com/modelcontextprotocol/servers)
```

This can be customized by adding the argument `--user-agent=YourUserAgent` to the `args` list in the configuration.

### Customization - Proxy

The server can be configured to use a proxy by using the `--proxy-url` argument.

## Windows Configuration

If you're experiencing timeout issues on Windows, you may need to set the `PYTHONIOENCODING` environment variable to ensure proper character encoding:

<details>
<summary>Windows configuration (uvx)</summary>

```json
{
  "mcpServers": {
    "fetch": {
      "command": "uvx",
      "args": ["mcp-server-fetch"],
      "env": {
        "PYTHONIOENCODING": "utf-8"
      }
    }
  }
}
```
</details>

<details>
<summary>Windows configuration (pip)</summary>

```json
{
  "mcpServers": {
    "fetch": {
      "command": "python",
      "args": ["-m", "mcp_server_fetch"],
      "env": {
        "PYTHONIOENCODING": "utf-8"
      }
    }
  }
}
```
</details>

This addresses character encoding issues that can cause the server to time out on Windows systems.

## Debugging

You can use the MCP inspector to debug the server. For uvx installations:

```
npx @modelcontextprotocol/inspector uvx mcp-server-fetch
```

Or if you've installed the package in a specific directory or are developing on it:

```
cd path/to/servers/src/fetch
npx @modelcontextprotocol/inspector uv run mcp-server-fetch
```

## Contributing

We encourage contributions to help expand and improve mcp-server-fetch. Whether you want to add new tools, enhance existing functionality, or improve documentation, your input is valuable.

For examples of other MCP servers and implementation patterns, see:
https://github.com/modelcontextprotocol/servers

Pull requests are welcome! Feel free to contribute new ideas, bug fixes, or enhancements to make mcp-server-fetch even more powerful and useful.

## License

mcp-server-fetch is licensed under the MIT License. This means you are free to use, modify, and distribute the software, subject to the terms and conditions of the MIT License. For more details, please see the LICENSE file in the project repository.
mcpServer/modules/fetch/pyproject.toml (new file, 42 lines)

```toml
[project]
name = "mcp-server-fetch"
version = "0.6.3"
description = "A Model Context Protocol server providing tools to fetch and convert web content for usage by LLMs"
readme = "README.md"
requires-python = ">=3.10"
authors = [{ name = "Anthropic, PBC." }]
maintainers = [{ name = "Jack Adamson", email = "jadamson@anthropic.com" }]
keywords = ["http", "mcp", "llm", "automation"]
license = { text = "MIT" }
classifiers = [
    "Development Status :: 4 - Beta",
    "Intended Audience :: Developers",
    "License :: OSI Approved :: MIT License",
    "Programming Language :: Python :: 3",
    "Programming Language :: Python :: 3.10",
]
dependencies = [
    "httpx<0.28",
    "markdownify>=0.13.1",
    "mcp>=1.1.3",
    "protego>=0.3.1",
    "pydantic>=2.0.0",
    "readabilipy>=0.2.0",
    "requests>=2.32.3",
    "starlette>=0.38.0",
    "uvicorn>=0.30.0",
]

[project.scripts]
mcp-server-fetch = "mcp_server_fetch:main"

[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"

[dependency-groups]
dev = ["pyright>=1.1.389", "ruff>=0.7.3", "pytest>=8.0.0", "pytest-asyncio>=0.21.0"]

[tool.pytest.ini_options]
testpaths = ["tests"]
asyncio_mode = "auto"
```
49 mcpServer/modules/fetch/src/mcp_server_fetch/__init__.py Normal file
@@ -0,0 +1,49 @@
from .server import serve


def main():
    """MCP Fetch Server - HTTP fetching functionality for MCP"""
    import argparse

    parser = argparse.ArgumentParser(
        description="give a model the ability to make web requests"
    )
    parser.add_argument("--user-agent", type=str, help="Custom User-Agent string")
    parser.add_argument(
        "--ignore-robots-txt",
        action="store_true",
        help="Ignore robots.txt restrictions",
    )
    parser.add_argument("--proxy-url", type=str, help="Proxy URL to use for requests")
    parser.add_argument(
        "--port",
        type=int,
        default=None,
        help="Port for SSE server (default: from MCP_PORT or SSE_PORT env, else 3000)",
    )
    parser.add_argument(
        "--host",
        type=str,
        default=None,
        help="Host to bind the SSE server to (default: MCP_HOST env or 127.0.0.1)",
    )

    args = parser.parse_args()
    import os

    port = args.port
    if port is None:
        port = int(os.environ.get("MCP_PORT") or os.environ.get("SSE_PORT") or "3000")
    host = args.host or os.environ.get("MCP_HOST") or "127.0.0.1"
    serve(
        custom_user_agent=args.user_agent,
        ignore_robots_txt=args.ignore_robots_txt,
        proxy_url=args.proxy_url,
        port=port,
        host=host,
    )


if __name__ == "__main__":
    main()
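
# Illustrative invocations (assumptions, not shipped docs):
#   python -m mcp_server_fetch --port 9000 --host 0.0.0.0
#   MCP_PORT=9000 python -m mcp_server_fetch   # env fallback when --port is omitted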
5 mcpServer/modules/fetch/src/mcp_server_fetch/__main__.py Normal file
@@ -0,0 +1,5 @@
# __main__.py

from mcp_server_fetch import main

main()
350 mcpServer/modules/fetch/src/mcp_server_fetch/server.py Normal file
@@ -0,0 +1,350 @@
from typing import Annotated, Tuple
from urllib.parse import urlparse, urlunparse

import markdownify
import readabilipy.simple_json
from mcp.shared.exceptions import McpError
from mcp.server import Server
from mcp.server.sse import SseServerTransport
from mcp.types import (
    ErrorData,
    GetPromptResult,
    Prompt,
    PromptArgument,
    PromptMessage,
    TextContent,
    Tool,
    INVALID_PARAMS,
    INTERNAL_ERROR,
)
from protego import Protego
from pydantic import BaseModel, Field, AnyUrl

DEFAULT_USER_AGENT_AUTONOMOUS = "Cariddi/1.0 (Autonomous; +https://git.andreagordanelli.com/Schrody/CariddiCTF)"
DEFAULT_USER_AGENT_MANUAL = "Cariddi/1.0 (User-Specified; +https://git.andreagordanelli.com/Schrody/CariddiCTF)"


def extract_content_from_html(html: str) -> str:
    """Extract and convert HTML content to Markdown format.

    Args:
        html: Raw HTML content to process

    Returns:
        Simplified markdown version of the content
    """
    ret = readabilipy.simple_json.simple_json_from_html_string(
        html, use_readability=True
    )
    if not ret["content"]:
        return "<error>Page failed to be simplified from HTML</error>"
    content = markdownify.markdownify(
        ret["content"],
        heading_style=markdownify.ATX,
    )
    return content


def get_robots_txt_url(url: str) -> str:
    """Get the robots.txt URL for a given website URL.

    Args:
        url: Website URL to get robots.txt for

    Returns:
        URL of the robots.txt file
    """
    # Parse the URL into components
    parsed = urlparse(url)

    # Reconstruct the base URL with just scheme, netloc, and /robots.txt path
    robots_url = urlunparse((parsed.scheme, parsed.netloc, "/robots.txt", "", "", ""))

    return robots_url
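
# Illustrative example (not part of the module; mirrors tests/test_server.py):
#   get_robots_txt_url("https://example.com:8080/page?q=1#frag")
#   -> "https://example.com:8080/robots.txt"
# Query string and fragment are discarded; scheme, host, and port are preserved.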
async def check_may_autonomously_fetch_url(url: str, user_agent: str, proxy_url: str | None = None) -> None:
    """
    Check if the URL can be fetched by the user agent according to the robots.txt file.
    Raises a McpError if not.
    """
    from httpx import AsyncClient, HTTPError

    robot_txt_url = get_robots_txt_url(url)

    async with AsyncClient(proxies=proxy_url) as client:
        try:
            response = await client.get(
                robot_txt_url,
                follow_redirects=True,
                headers={"User-Agent": user_agent},
            )
        except HTTPError:
            raise McpError(ErrorData(
                code=INTERNAL_ERROR,
                message=f"Failed to fetch robots.txt {robot_txt_url} due to a connection issue",
            ))
        if response.status_code in (401, 403):
            raise McpError(ErrorData(
                code=INTERNAL_ERROR,
                message=f"When fetching robots.txt ({robot_txt_url}), received status {response.status_code} so assuming that autonomous fetching is not allowed, the user can try manually fetching by using the fetch prompt",
            ))
        elif 400 <= response.status_code < 500:
            return
        robot_txt = response.text
    processed_robot_txt = "\n".join(
        line for line in robot_txt.splitlines() if not line.strip().startswith("#")
    )
    robot_parser = Protego.parse(processed_robot_txt)
    if not robot_parser.can_fetch(str(url), user_agent):
        raise McpError(ErrorData(
            code=INTERNAL_ERROR,
            message=f"The site's robots.txt ({robot_txt_url}) specifies that autonomous fetching of this page is not allowed, "
            f"<useragent>{user_agent}</useragent>\n"
            f"<url>{url}</url>"
            f"<robots>\n{robot_txt}\n</robots>\n"
            f"The assistant must let the user know that it failed to view the page. The assistant may provide further guidance based on the above information.\n"
            f"The assistant can tell the user that they can try manually fetching the page by using the fetch prompt within their UI.",
        ))


async def fetch_url(
    url: str, user_agent: str, force_raw: bool = False, proxy_url: str | None = None
) -> Tuple[str, str]:
    """
    Fetch the URL and return the content in a form ready for the LLM, as well as a prefix string with status information.
    """
    from httpx import AsyncClient, HTTPError

    async with AsyncClient(proxies=proxy_url) as client:
        try:
            response = await client.get(
                url,
                follow_redirects=True,
                headers={"User-Agent": user_agent},
                timeout=30,
            )
        except HTTPError as e:
            raise McpError(ErrorData(code=INTERNAL_ERROR, message=f"Failed to fetch {url}: {e!r}"))
        if response.status_code >= 400:
            raise McpError(ErrorData(
                code=INTERNAL_ERROR,
                message=f"Failed to fetch {url} - status code {response.status_code}",
            ))

        page_raw = response.text

    content_type = response.headers.get("content-type", "")
    is_page_html = (
        "<html" in page_raw[:100] or "text/html" in content_type or not content_type
    )

    if is_page_html and not force_raw:
        return extract_content_from_html(page_raw), ""

    return (
        page_raw,
        f"Content type {content_type} cannot be simplified to markdown, but here is the raw content:\n",
    )


class Fetch(BaseModel):
    """Parameters for fetching a URL."""

    url: Annotated[AnyUrl, Field(description="URL to fetch")]
    max_length: Annotated[
        int,
        Field(
            default=5000,
            description="Maximum number of characters to return.",
            gt=0,
            lt=1000000,
        ),
    ]
    start_index: Annotated[
        int,
        Field(
            default=0,
            description="On return output starting at this character index, useful if a previous fetch was truncated and more context is required.",
            ge=0,
        ),
    ]
    raw: Annotated[
        bool,
        Field(
            default=False,
            description="Get the actual HTML content of the requested page, without simplification.",
        ),
    ]
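
# Illustrative example (not shipped code): arguments for the "fetch" tool are
# validated against this model, e.g.
#   Fetch(url="https://example.com", max_length=2000, start_index=0, raw=False)
# Out-of-range values (e.g. max_length <= 0) raise a pydantic validation error.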
def createServer(
    custom_user_agent: str | None = None,
    ignore_robots_txt: bool = False,
    proxy_url: str | None = None,
) -> Tuple[Server, dict]:
    """Create and configure the fetch MCP server (tools, prompts, options).

    Returns:
        Tuple of (Server instance, initialization options dict).
    """
    server = Server("mcp-fetch")
    user_agent_autonomous = custom_user_agent or DEFAULT_USER_AGENT_AUTONOMOUS
    user_agent_manual = custom_user_agent or DEFAULT_USER_AGENT_MANUAL

    @server.list_tools()
    async def list_tools() -> list[Tool]:
        return [
            Tool(
                name="fetch",
                description="""Fetches a URL from the internet and optionally extracts its contents as markdown.

Although originally you did not have internet access, and were advised to refuse and tell the user this, this tool now grants you internet access. Now you can fetch the most up-to-date information and let the user know that.""",
                inputSchema=Fetch.model_json_schema(),
            )
        ]

    @server.list_prompts()
    async def list_prompts() -> list[Prompt]:
        return [
            Prompt(
                name="fetch",
                description="Fetch a URL and extract its contents as markdown",
                arguments=[
                    PromptArgument(
                        name="url", description="URL to fetch", required=True
                    )
                ],
            )
        ]

    @server.call_tool()
    async def call_tool(name, arguments: dict) -> list[TextContent]:
        try:
            args = Fetch(**arguments)
        except ValueError as e:
            raise McpError(ErrorData(code=INVALID_PARAMS, message=str(e)))

        url = str(args.url)
        if not url:
            raise McpError(ErrorData(code=INVALID_PARAMS, message="URL is required"))

        if not ignore_robots_txt:
            await check_may_autonomously_fetch_url(url, user_agent_autonomous, proxy_url)

        content, prefix = await fetch_url(
            url, user_agent_autonomous, force_raw=args.raw, proxy_url=proxy_url
        )
        original_length = len(content)
        if args.start_index >= original_length:
            content = "<error>No more content available.</error>"
        else:
            truncated_content = content[args.start_index : args.start_index + args.max_length]
            if not truncated_content:
                content = "<error>No more content available.</error>"
            else:
                content = truncated_content
                actual_content_length = len(truncated_content)
                remaining_content = original_length - (args.start_index + actual_content_length)
                # Only add the prompt to continue fetching if there is still remaining content
                if actual_content_length == args.max_length and remaining_content > 0:
                    next_start = args.start_index + actual_content_length
                    content += f"\n\n<error>Content truncated. Call the fetch tool with a start_index of {next_start} to get more content.</error>"
        return [TextContent(type="text", text=f"{prefix}Contents of {url}:\n{content}")]

    @server.get_prompt()
    async def get_prompt(name: str, arguments: dict | None) -> GetPromptResult:
        if not arguments or "url" not in arguments:
            raise McpError(ErrorData(code=INVALID_PARAMS, message="URL is required"))

        url = arguments["url"]

        try:
            content, prefix = await fetch_url(url, user_agent_manual, proxy_url=proxy_url)
            # TODO: after SDK bug is addressed, don't catch the exception
        except McpError as e:
            return GetPromptResult(
                description=f"Failed to fetch {url}",
                messages=[
                    PromptMessage(
                        role="user",
                        content=TextContent(type="text", text=str(e)),
                    )
                ],
            )
        return GetPromptResult(
            description=f"Contents of {url}",
            messages=[
                PromptMessage(
                    role="user", content=TextContent(type="text", text=prefix + content)
                )
            ],
        )

    return server, server.create_initialization_options()


def serve(
    custom_user_agent: str | None = None,
    ignore_robots_txt: bool = False,
    proxy_url: str | None = None,
    port: int = 3000,
    host: str = "0.0.0.0",
) -> None:
    """Run the fetch MCP server over SSE.

    Args:
        custom_user_agent: Optional custom User-Agent string to use for requests
        ignore_robots_txt: Whether to ignore robots.txt restrictions
        proxy_url: Optional proxy URL to use for requests
        port: Port for the SSE HTTP server
        host: Host to bind the SSE server to
    """
    import asyncio

    from starlette.applications import Starlette
    from starlette.requests import Request
    from starlette.responses import Response
    from starlette.routing import Mount, Route

    def buildApp():
        server, options = createServer(
            custom_user_agent, ignore_robots_txt, proxy_url
        )
        sse = SseServerTransport("/messages/")

        async def handleSse(request: Request) -> Response:
            async with sse.connect_sse(
                request.scope, request.receive, request._send
            ) as streams:
                await server.run(
                    streams[0], streams[1], options, raise_exceptions=True
                )
            return Response()

        routes = [
            Route("/sse", endpoint=handleSse, methods=["GET"]),
            Route("/", endpoint=handleSse, methods=["GET"]),
            Mount("/messages/", app=sse.handle_post_message),
        ]
        return Starlette(routes=routes)

    async def run():
        app = buildApp()
        import uvicorn

        config = uvicorn.Config(app, host=host, port=port, log_level="info")
        server_uv = uvicorn.Server(config)
        await server_uv.serve()

    import sys

    sys.stderr.write(
        f"MCP Fetch Server running on SSE at http://{host}:{port}\n"
    )
    sys.stderr.write("  GET /sse or / – open SSE stream\n")
    sys.stderr.write(
        "  POST /messages/?session_id=<id> – send MCP messages\n"
    )
    sys.stderr.flush()
    asyncio.run(run())
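
# Illustrative usage (assumption, not shipped code): calling serve() directly
# binds uvicorn to the given host/port and exposes the SSE routes above, e.g.
#   serve(port=3000, host="127.0.0.1")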
0 mcpServer/modules/fetch/tests/__init__.py Normal file
326 mcpServer/modules/fetch/tests/test_server.py Normal file
@@ -0,0 +1,326 @@
"""Tests for the fetch MCP server."""

import pytest
from unittest.mock import AsyncMock, patch, MagicMock
from mcp.shared.exceptions import McpError

from mcp_server_fetch.server import (
    extract_content_from_html,
    get_robots_txt_url,
    check_may_autonomously_fetch_url,
    fetch_url,
    DEFAULT_USER_AGENT_AUTONOMOUS,
)
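
# Pattern used throughout these tests: httpx.AsyncClient is patched so that the
# async context manager yields an AsyncMock whose .get() returns a canned
# MagicMock response -- no real network traffic is performed.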
class TestGetRobotsTxtUrl:
    """Tests for get_robots_txt_url function."""

    def test_simple_url(self):
        """Test with a simple URL."""
        result = get_robots_txt_url("https://example.com/page")
        assert result == "https://example.com/robots.txt"

    def test_url_with_path(self):
        """Test with URL containing path."""
        result = get_robots_txt_url("https://example.com/some/deep/path/page.html")
        assert result == "https://example.com/robots.txt"

    def test_url_with_query_params(self):
        """Test with URL containing query parameters."""
        result = get_robots_txt_url("https://example.com/page?foo=bar&baz=qux")
        assert result == "https://example.com/robots.txt"

    def test_url_with_port(self):
        """Test with URL containing port number."""
        result = get_robots_txt_url("https://example.com:8080/page")
        assert result == "https://example.com:8080/robots.txt"

    def test_url_with_fragment(self):
        """Test with URL containing fragment."""
        result = get_robots_txt_url("https://example.com/page#section")
        assert result == "https://example.com/robots.txt"

    def test_http_url(self):
        """Test with HTTP URL."""
        result = get_robots_txt_url("http://example.com/page")
        assert result == "http://example.com/robots.txt"


class TestExtractContentFromHtml:
    """Tests for extract_content_from_html function."""

    def test_simple_html(self):
        """Test with simple HTML content."""
        html = """
        <html>
        <head><title>Test Page</title></head>
        <body>
        <article>
        <h1>Hello World</h1>
        <p>This is a test paragraph.</p>
        </article>
        </body>
        </html>
        """
        result = extract_content_from_html(html)
        # readabilipy may extract different parts depending on the content
        assert "test paragraph" in result

    def test_html_with_links(self):
        """Test that links are converted to markdown."""
        html = """
        <html>
        <body>
        <article>
        <p>Visit <a href="https://example.com">Example</a> for more.</p>
        </article>
        </body>
        </html>
        """
        result = extract_content_from_html(html)
        assert "Example" in result

    def test_empty_content_returns_error(self):
        """Test that empty/invalid HTML returns error message."""
        html = ""
        result = extract_content_from_html(html)
        assert "<error>" in result


class TestCheckMayAutonomouslyFetchUrl:
    """Tests for check_may_autonomously_fetch_url function."""

    @pytest.mark.asyncio
    async def test_allows_when_robots_txt_404(self):
        """Test that fetching is allowed when robots.txt returns 404."""
        mock_response = MagicMock()
        mock_response.status_code = 404

        with patch("httpx.AsyncClient") as mock_client_class:
            mock_client = AsyncMock()
            mock_client.get = AsyncMock(return_value=mock_response)
            mock_client_class.return_value.__aenter__ = AsyncMock(return_value=mock_client)
            mock_client_class.return_value.__aexit__ = AsyncMock(return_value=None)

            # Should not raise
            await check_may_autonomously_fetch_url(
                "https://example.com/page",
                DEFAULT_USER_AGENT_AUTONOMOUS
            )

    @pytest.mark.asyncio
    async def test_blocks_when_robots_txt_401(self):
        """Test that fetching is blocked when robots.txt returns 401."""
        mock_response = MagicMock()
        mock_response.status_code = 401

        with patch("httpx.AsyncClient") as mock_client_class:
            mock_client = AsyncMock()
            mock_client.get = AsyncMock(return_value=mock_response)
            mock_client_class.return_value.__aenter__ = AsyncMock(return_value=mock_client)
            mock_client_class.return_value.__aexit__ = AsyncMock(return_value=None)

            with pytest.raises(McpError):
                await check_may_autonomously_fetch_url(
                    "https://example.com/page",
                    DEFAULT_USER_AGENT_AUTONOMOUS
                )

    @pytest.mark.asyncio
    async def test_blocks_when_robots_txt_403(self):
        """Test that fetching is blocked when robots.txt returns 403."""
        mock_response = MagicMock()
        mock_response.status_code = 403

        with patch("httpx.AsyncClient") as mock_client_class:
            mock_client = AsyncMock()
            mock_client.get = AsyncMock(return_value=mock_response)
            mock_client_class.return_value.__aenter__ = AsyncMock(return_value=mock_client)
            mock_client_class.return_value.__aexit__ = AsyncMock(return_value=None)

            with pytest.raises(McpError):
                await check_may_autonomously_fetch_url(
                    "https://example.com/page",
                    DEFAULT_USER_AGENT_AUTONOMOUS
                )

    @pytest.mark.asyncio
    async def test_allows_when_robots_txt_allows_all(self):
        """Test that fetching is allowed when robots.txt allows all."""
        mock_response = MagicMock()
        mock_response.status_code = 200
        mock_response.text = "User-agent: *\nAllow: /"

        with patch("httpx.AsyncClient") as mock_client_class:
            mock_client = AsyncMock()
            mock_client.get = AsyncMock(return_value=mock_response)
            mock_client_class.return_value.__aenter__ = AsyncMock(return_value=mock_client)
            mock_client_class.return_value.__aexit__ = AsyncMock(return_value=None)

            # Should not raise
            await check_may_autonomously_fetch_url(
                "https://example.com/page",
                DEFAULT_USER_AGENT_AUTONOMOUS
            )

    @pytest.mark.asyncio
    async def test_blocks_when_robots_txt_disallows_all(self):
        """Test that fetching is blocked when robots.txt disallows all."""
        mock_response = MagicMock()
        mock_response.status_code = 200
        mock_response.text = "User-agent: *\nDisallow: /"

        with patch("httpx.AsyncClient") as mock_client_class:
            mock_client = AsyncMock()
            mock_client.get = AsyncMock(return_value=mock_response)
            mock_client_class.return_value.__aenter__ = AsyncMock(return_value=mock_client)
            mock_client_class.return_value.__aexit__ = AsyncMock(return_value=None)

            with pytest.raises(McpError):
                await check_may_autonomously_fetch_url(
                    "https://example.com/page",
                    DEFAULT_USER_AGENT_AUTONOMOUS
                )


class TestFetchUrl:
    """Tests for fetch_url function."""

    @pytest.mark.asyncio
    async def test_fetch_html_page(self):
        """Test fetching an HTML page returns markdown content."""
        mock_response = MagicMock()
        mock_response.status_code = 200
        mock_response.text = """
        <html>
        <body>
        <article>
        <h1>Test Page</h1>
        <p>Hello World</p>
        </article>
        </body>
        </html>
        """
        mock_response.headers = {"content-type": "text/html"}

        with patch("httpx.AsyncClient") as mock_client_class:
            mock_client = AsyncMock()
            mock_client.get = AsyncMock(return_value=mock_response)
            mock_client_class.return_value.__aenter__ = AsyncMock(return_value=mock_client)
            mock_client_class.return_value.__aexit__ = AsyncMock(return_value=None)

            content, prefix = await fetch_url(
                "https://example.com/page",
                DEFAULT_USER_AGENT_AUTONOMOUS
            )

            # HTML is processed, so we check it returns something
            assert isinstance(content, str)
            assert prefix == ""

    @pytest.mark.asyncio
    async def test_fetch_html_page_raw(self):
        """Test fetching an HTML page with raw=True returns original HTML."""
        html_content = "<html><body><h1>Test</h1></body></html>"
        mock_response = MagicMock()
        mock_response.status_code = 200
        mock_response.text = html_content
        mock_response.headers = {"content-type": "text/html"}

        with patch("httpx.AsyncClient") as mock_client_class:
            mock_client = AsyncMock()
            mock_client.get = AsyncMock(return_value=mock_response)
            mock_client_class.return_value.__aenter__ = AsyncMock(return_value=mock_client)
            mock_client_class.return_value.__aexit__ = AsyncMock(return_value=None)

            content, prefix = await fetch_url(
                "https://example.com/page",
                DEFAULT_USER_AGENT_AUTONOMOUS,
                force_raw=True
            )

            assert content == html_content
            assert "cannot be simplified" in prefix

    @pytest.mark.asyncio
    async def test_fetch_json_returns_raw(self):
        """Test fetching JSON content returns raw content."""
        json_content = '{"key": "value"}'
        mock_response = MagicMock()
        mock_response.status_code = 200
        mock_response.text = json_content
        mock_response.headers = {"content-type": "application/json"}

        with patch("httpx.AsyncClient") as mock_client_class:
            mock_client = AsyncMock()
            mock_client.get = AsyncMock(return_value=mock_response)
            mock_client_class.return_value.__aenter__ = AsyncMock(return_value=mock_client)
            mock_client_class.return_value.__aexit__ = AsyncMock(return_value=None)

            content, prefix = await fetch_url(
                "https://api.example.com/data",
                DEFAULT_USER_AGENT_AUTONOMOUS
            )

            assert content == json_content
            assert "cannot be simplified" in prefix

    @pytest.mark.asyncio
    async def test_fetch_404_raises_error(self):
        """Test that 404 response raises McpError."""
        mock_response = MagicMock()
        mock_response.status_code = 404

        with patch("httpx.AsyncClient") as mock_client_class:
            mock_client = AsyncMock()
            mock_client.get = AsyncMock(return_value=mock_response)
            mock_client_class.return_value.__aenter__ = AsyncMock(return_value=mock_client)
            mock_client_class.return_value.__aexit__ = AsyncMock(return_value=None)

            with pytest.raises(McpError):
                await fetch_url(
                    "https://example.com/notfound",
                    DEFAULT_USER_AGENT_AUTONOMOUS
                )

    @pytest.mark.asyncio
    async def test_fetch_500_raises_error(self):
        """Test that 500 response raises McpError."""
        mock_response = MagicMock()
        mock_response.status_code = 500

        with patch("httpx.AsyncClient") as mock_client_class:
            mock_client = AsyncMock()
            mock_client.get = AsyncMock(return_value=mock_response)
            mock_client_class.return_value.__aenter__ = AsyncMock(return_value=mock_client)
            mock_client_class.return_value.__aexit__ = AsyncMock(return_value=None)

            with pytest.raises(McpError):
                await fetch_url(
                    "https://example.com/error",
                    DEFAULT_USER_AGENT_AUTONOMOUS
                )

    @pytest.mark.asyncio
    async def test_fetch_with_proxy(self):
        """Test that proxy URL is passed to client."""
        mock_response = MagicMock()
        mock_response.status_code = 200
        mock_response.text = '{"data": "test"}'
        mock_response.headers = {"content-type": "application/json"}

        with patch("httpx.AsyncClient") as mock_client_class:
            mock_client = AsyncMock()
            mock_client.get = AsyncMock(return_value=mock_response)
            mock_client_class.return_value.__aenter__ = AsyncMock(return_value=mock_client)
            mock_client_class.return_value.__aexit__ = AsyncMock(return_value=None)

            await fetch_url(
                "https://example.com/data",
                DEFAULT_USER_AGENT_AUTONOMOUS,
                proxy_url="http://proxy.example.com:8080"
            )

            # Verify AsyncClient was called with proxy
            mock_client_class.assert_called_once_with(proxies="http://proxy.example.com:8080")
1285 mcpServer/modules/fetch/uv.lock generated Normal file
File diff suppressed because it is too large
25 mcpServer/modules/filesystem/Dockerfile Normal file
@@ -0,0 +1,25 @@
FROM node:22.12-alpine AS builder

WORKDIR /app

COPY . /app
COPY tsconfig.json /tsconfig.json

RUN npm install

RUN npm ci --ignore-scripts --omit-dev


FROM node:22-alpine AS release

WORKDIR /app

COPY --from=builder /app/dist /app/dist
COPY --from=builder /app/package.json /app/package.json
COPY --from=builder /app/package-lock.json /app/package-lock.json

ENV NODE_ENV=production

RUN npm ci --ignore-scripts --omit-dev

ENTRYPOINT ["node", "/app/dist/index.js"]
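
# Illustrative build/run (assumption, matching the README's Build section below):
#   docker build -t mcp/filesystem -f src/filesystem/Dockerfile .
#   docker run -i --rm --mount type=bind,src=$(pwd),dst=/projects/workspace mcp/filesystem /projects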
321 mcpServer/modules/filesystem/README.md Normal file
@@ -0,0 +1,321 @@
# Filesystem MCP Server

Node.js server implementing Model Context Protocol (MCP) for filesystem operations.

## Features

- Read/write files
- Create/list/delete directories
- Move files/directories
- Search files
- Get file metadata
- Dynamic directory access control via [Roots](https://modelcontextprotocol.io/docs/learn/client-concepts#roots)

## Directory Access Control

The server uses a flexible directory access control system. Directories can be specified via command-line arguments or dynamically via [Roots](https://modelcontextprotocol.io/docs/learn/client-concepts#roots).

### Method 1: Command-line Arguments

Specify allowed directories when starting the server:

```bash
mcp-server-filesystem /path/to/dir1 /path/to/dir2
```

### Method 2: MCP Roots (Recommended)

MCP clients that support [Roots](https://modelcontextprotocol.io/docs/learn/client-concepts#roots) can dynamically update the allowed directories.

Roots supplied by the client to the server completely replace any server-side allowed directories when provided.

**Important**: If the server starts without command-line arguments AND the client doesn't support the roots protocol (or provides empty roots), the server will throw an error during initialization.

This is the recommended method, as it enables runtime directory updates via `roots/list_changed` notifications without a server restart, providing a more flexible and modern integration experience.

### How It Works

The server's directory access control follows this flow (a sketch of the `roots/list` payload follows the list):

1. **Server Startup**
   - Server starts with directories from command-line arguments (if provided)
   - If no arguments are provided, the server starts with empty allowed directories

2. **Client Connection & Initialization**
   - Client connects and sends an `initialize` request with capabilities
   - Server checks if the client supports the roots protocol (`capabilities.roots`)

3. **Roots Protocol Handling** (if client supports roots)
   - **On initialization**: Server requests roots from the client via `roots/list`
   - Client responds with its configured roots
   - Server replaces ALL allowed directories with the client's roots
   - **On runtime updates**: Client can send `notifications/roots/list_changed`
   - Server requests updated roots and replaces allowed directories again

4. **Fallback Behavior** (if client doesn't support roots)
   - Server continues using command-line directories only
   - No dynamic updates possible

5. **Access Control**
   - All filesystem operations are restricted to allowed directories
   - Use the `list_allowed_directories` tool to see current directories
   - Server requires at least ONE allowed directory to operate

**Note**: The server will only allow operations within directories specified either via `args` or via Roots.
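
For orientation, a sketch of the `roots/list` exchange (shapes per the MCP spec; the paths are placeholders, not shipped configuration). The server's request is answered by the client with a result like the following, after which the allowed-directory set is replaced wholesale:

```python
# Sketch of a client's roots/list result, written as a Python dict (placeholder paths).
# Each root is a file:// URI; the server replaces ALL allowed directories with these.
roots_list_result = {
    "roots": [
        {"uri": "file:///home/user/projects/ctf", "name": "ctf"},      # placeholder
        {"uri": "file:///home/user/projects/notes", "name": "notes"},  # placeholder
    ]
}
```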
## API

### Tools

- **read_text_file**
  - Read complete contents of a file as text
  - Inputs:
    - `path` (string)
    - `head` (number, optional): First N lines
    - `tail` (number, optional): Last N lines
  - Always treats the file as UTF-8 text regardless of extension
  - Cannot specify both `head` and `tail` simultaneously

- **read_media_file**
  - Read an image or audio file
  - Inputs:
    - `path` (string)
  - Streams the file and returns base64 data with the corresponding MIME type

- **read_multiple_files**
  - Read multiple files simultaneously
  - Input: `paths` (string[])
  - Failed reads won't stop the entire operation

- **write_file**
  - Create new file or overwrite existing (exercise caution with this)
  - Inputs:
    - `path` (string): File location
    - `content` (string): File content

- **edit_file**
  - Make selective edits using advanced pattern matching and formatting
  - Features:
    - Line-based and multi-line content matching
    - Whitespace normalization with indentation preservation
    - Multiple simultaneous edits with correct positioning
    - Indentation style detection and preservation
    - Git-style diff output with context
    - Preview changes with dry run mode
  - Inputs:
    - `path` (string): File to edit
    - `edits` (array): List of edit operations
      - `oldText` (string): Text to search for (can be substring)
      - `newText` (string): Text to replace with
    - `dryRun` (boolean): Preview changes without applying (default: false)
  - Returns detailed diff and match information for dry runs, otherwise applies changes
  - Best Practice: Always use dryRun first to preview changes before applying them (see the client-side sketch after this list)

- **create_directory**
  - Create new directory or ensure it exists
  - Input: `path` (string)
  - Creates parent directories if needed
  - Succeeds silently if directory exists

- **list_directory**
  - List directory contents with [FILE] or [DIR] prefixes
  - Input: `path` (string)

- **list_directory_with_sizes**
  - List directory contents with [FILE] or [DIR] prefixes, including file sizes
  - Inputs:
    - `path` (string): Directory path to list
    - `sortBy` (string, optional): Sort entries by "name" or "size" (default: "name")
  - Returns detailed listing with file sizes and summary statistics
  - Shows total files, directories, and combined size

- **move_file**
  - Move or rename files and directories
  - Inputs:
    - `source` (string)
    - `destination` (string)
  - Fails if destination exists

- **search_files**
  - Recursively search for files/directories that match or do not match patterns
  - Inputs:
    - `path` (string): Starting directory
    - `pattern` (string): Search pattern
    - `excludePatterns` (string[]): Exclude any patterns
  - Glob-style pattern matching
  - Returns full paths to matches

- **directory_tree**
  - Get recursive JSON tree structure of directory contents
  - Inputs:
    - `path` (string): Starting directory
    - `excludePatterns` (string[]): Exclude any patterns. Glob formats are supported.
  - Returns:
    - JSON array where each entry contains:
      - `name` (string): File/directory name
      - `type` ('file'|'directory'): Entry type
      - `children` (array): Present only for directories
        - Empty array for empty directories
        - Omitted for files
    - Output is formatted with 2-space indentation for readability

- **get_file_info**
  - Get detailed file/directory metadata
  - Input: `path` (string)
  - Returns:
    - Size
    - Creation time
    - Modified time
    - Access time
    - Type (file/directory)
    - Permissions

- **list_allowed_directories**
  - List all directories the server is allowed to access
  - No input required
  - Returns:
    - Directories that this server can read/write from
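
As a sketch of the dry-run workflow (assuming the Python `mcp` client SDK with an already-initialized `ClientSession` named `session`; the path and edit below are hypothetical):

```python
# Sketch, not shipped code: preview an edit with dryRun before applying it.
result = await session.call_tool(
    "edit_file",
    arguments={
        "path": "/projects/workspace/app.js",  # hypothetical file
        "edits": [{"oldText": "var x = 1", "newText": "const x = 1"}],
        "dryRun": True,  # returns a diff, applies nothing
    },
)
print(result.content[0].text)  # git-style diff preview
```

Re-running the same call with `"dryRun": False` (or the field omitted) applies the edit.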
### Tool annotations (MCP hints)

This server sets [MCP ToolAnnotations](https://modelcontextprotocol.io/specification/2025-03-26/server/tools#toolannotations)
on each tool so clients can:

- Distinguish **read‑only** tools from write‑capable tools.
- Understand which write operations are **idempotent** (safe to retry with the same arguments).
- Highlight operations that may be **destructive** (overwriting or heavily mutating data).

The mapping for filesystem tools is:

| Tool                         | readOnlyHint | idempotentHint | destructiveHint | Notes                                      |
|------------------------------|--------------|----------------|-----------------|--------------------------------------------|
| `read_text_file`             | `true`       | –              | –               | Pure read                                  |
| `read_media_file`            | `true`       | –              | –               | Pure read                                  |
| `read_multiple_files`        | `true`       | –              | –               | Pure read                                  |
| `list_directory`             | `true`       | –              | –               | Pure read                                  |
| `list_directory_with_sizes`  | `true`       | –              | –               | Pure read                                  |
| `directory_tree`             | `true`       | –              | –               | Pure read                                  |
| `search_files`               | `true`       | –              | –               | Pure read                                  |
| `get_file_info`              | `true`       | –              | –               | Pure read                                  |
| `list_allowed_directories`   | `true`       | –              | –               | Pure read                                  |
| `create_directory`           | `false`      | `true`         | `false`         | Re‑creating the same dir is a no‑op        |
| `write_file`                 | `false`      | `true`         | `true`          | Overwrites existing files                  |
| `edit_file`                  | `false`      | `false`        | `true`          | Re‑applying edits can fail or double‑apply |
| `move_file`                  | `false`      | `false`        | `false`         | Move/rename only; repeat usually errors    |

> Note: `idempotentHint` and `destructiveHint` are meaningful only when `readOnlyHint` is `false`, as defined by the MCP spec.

## Usage with Claude Desktop

Add this to your `claude_desktop_config.json`:

Note: you can provide sandboxed directories to the server by mounting them to `/projects`. Adding the `ro` flag will make the directory read-only for the server.

### Docker

Note: all directories must be mounted to `/projects` by default.

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "--mount", "type=bind,src=/Users/username/Desktop,dst=/projects/Desktop",
        "--mount", "type=bind,src=/path/to/other/allowed/dir,dst=/projects/other/allowed/dir,ro",
        "--mount", "type=bind,src=/path/to/file.txt,dst=/projects/path/to/file.txt",
        "mcp/filesystem",
        "/projects"
      ]
    }
  }
}
```

### NPX

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/username/Desktop",
        "/path/to/other/allowed/dir"
      ]
    }
  }
}
```

## Usage with VS Code

For quick installation, click the installation buttons below...

[](https://insiders.vscode.dev/redirect/mcp/install?name=filesystem&config=%7B%22command%22%3A%22npx%22%2C%22args%22%3A%5B%22-y%22%2C%22%40modelcontextprotocol%2Fserver-filesystem%22%2C%22%24%7BworkspaceFolder%7D%22%5D%7D) [](https://insiders.vscode.dev/redirect/mcp/install?name=filesystem&config=%7B%22command%22%3A%22npx%22%2C%22args%22%3A%5B%22-y%22%2C%22%40modelcontextprotocol%2Fserver-filesystem%22%2C%22%24%7BworkspaceFolder%7D%22%5D%7D&quality=insiders)

[](https://insiders.vscode.dev/redirect/mcp/install?name=filesystem&config=%7B%22command%22%3A%22docker%22%2C%22args%22%3A%5B%22run%22%2C%22-i%22%2C%22--rm%22%2C%22--mount%22%2C%22type%3Dbind%2Csrc%3D%24%7BworkspaceFolder%7D%2Cdst%3D%2Fprojects%2Fworkspace%22%2C%22mcp%2Ffilesystem%22%2C%22%2Fprojects%22%5D%7D) [](https://insiders.vscode.dev/redirect/mcp/install?name=filesystem&config=%7B%22command%22%3A%22docker%22%2C%22args%22%3A%5B%22run%22%2C%22-i%22%2C%22--rm%22%2C%22--mount%22%2C%22type%3Dbind%2Csrc%3D%24%7BworkspaceFolder%7D%2Cdst%3D%2Fprojects%2Fworkspace%22%2C%22mcp%2Ffilesystem%22%2C%22%2Fprojects%22%5D%7D&quality=insiders)

For manual installation, you can configure the MCP server using one of these methods:

**Method 1: User Configuration (Recommended)**
Add the configuration to your user-level MCP configuration file. Open the Command Palette (`Ctrl + Shift + P`) and run `MCP: Open User Configuration`. This will open your user `mcp.json` file where you can add the server configuration.

**Method 2: Workspace Configuration**
Alternatively, you can add the configuration to a file called `.vscode/mcp.json` in your workspace. This will allow you to share the configuration with others.

> For more details about MCP configuration in VS Code, see the [official VS Code MCP documentation](https://code.visualstudio.com/docs/copilot/customization/mcp-servers).

You can provide sandboxed directories to the server by mounting them to `/projects`. Adding the `ro` flag will make the directory read-only for the server.

### Docker

Note: all directories must be mounted to `/projects` by default.

```json
{
  "servers": {
    "filesystem": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "--mount", "type=bind,src=${workspaceFolder},dst=/projects/workspace",
        "mcp/filesystem",
        "/projects"
      ]
    }
  }
}
```

### NPX

```json
{
  "servers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "${workspaceFolder}"
      ]
    }
  }
}
```

## Build

Docker build:

```bash
docker build -t mcp/filesystem -f src/filesystem/Dockerfile .
```

## License

This MCP server is licensed under the MIT License. This means you are free to use, modify, and distribute the software, subject to the terms and conditions of the MIT License. For more details, please see the LICENSE file in the project repository.
147 mcpServer/modules/filesystem/__tests__/directory-tree.test.ts Normal file
@@ -0,0 +1,147 @@
import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import * as fs from 'fs/promises';
import * as path from 'path';
import * as os from 'os';

// We need to test the buildTree function, but it's defined inside the request handler,
// so we'll extract the core logic into a testable function
import { minimatch } from 'minimatch';

interface TreeEntry {
  name: string;
  type: 'file' | 'directory';
  children?: TreeEntry[];
}

async function buildTreeForTesting(currentPath: string, rootPath: string, excludePatterns: string[] = []): Promise<TreeEntry[]> {
  const entries = await fs.readdir(currentPath, { withFileTypes: true });
  const result: TreeEntry[] = [];

  for (const entry of entries) {
    const relativePath = path.relative(rootPath, path.join(currentPath, entry.name));
    const shouldExclude = excludePatterns.some(pattern => {
      if (pattern.includes('*')) {
        return minimatch(relativePath, pattern, { dot: true });
      }
      // For files: match exact name or as part of path
      // For directories: match as directory path
      return minimatch(relativePath, pattern, { dot: true }) ||
             minimatch(relativePath, `**/${pattern}`, { dot: true }) ||
             minimatch(relativePath, `**/${pattern}/**`, { dot: true });
    });
    if (shouldExclude)
      continue;

    const entryData: TreeEntry = {
      name: entry.name,
      type: entry.isDirectory() ? 'directory' : 'file'
    };

    if (entry.isDirectory()) {
      const subPath = path.join(currentPath, entry.name);
      entryData.children = await buildTreeForTesting(subPath, rootPath, excludePatterns);
    }

    result.push(entryData);
  }

  return result;
}
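
// Illustrative call (not part of the suite): buildTreeForTesting(dir, dir, ['node_modules'])
// resolves to a nested TreeEntry[] mirroring the directory tree, minus excluded entries.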
describe('buildTree exclude patterns', () => {
  let testDir: string;

  beforeEach(async () => {
    testDir = await fs.mkdtemp(path.join(os.tmpdir(), 'filesystem-test-'));

    // Create test directory structure
    await fs.mkdir(path.join(testDir, 'src'));
    await fs.mkdir(path.join(testDir, 'node_modules'));
    await fs.mkdir(path.join(testDir, '.git'));
    await fs.mkdir(path.join(testDir, 'nested', 'node_modules'), { recursive: true });

    // Create test files
    await fs.writeFile(path.join(testDir, '.env'), 'SECRET=value');
    await fs.writeFile(path.join(testDir, '.env.local'), 'LOCAL_SECRET=value');
    await fs.writeFile(path.join(testDir, 'src', 'index.js'), 'console.log("hello");');
    await fs.writeFile(path.join(testDir, 'package.json'), '{}');
    await fs.writeFile(path.join(testDir, 'node_modules', 'module.js'), 'module.exports = {};');
    await fs.writeFile(path.join(testDir, 'nested', 'node_modules', 'deep.js'), 'module.exports = {};');
  });

  afterEach(async () => {
    await fs.rm(testDir, { recursive: true, force: true });
  });

  it('should exclude files matching simple patterns', async () => {
    // Test the current implementation - this will fail until the bug is fixed
    const tree = await buildTreeForTesting(testDir, testDir, ['.env']);
    const fileNames = tree.map(entry => entry.name);

    expect(fileNames).not.toContain('.env');
    expect(fileNames).toContain('.env.local'); // Should not exclude this
    expect(fileNames).toContain('src');
    expect(fileNames).toContain('package.json');
  });

  it('should exclude directories matching simple patterns', async () => {
    const tree = await buildTreeForTesting(testDir, testDir, ['node_modules']);
    const dirNames = tree.map(entry => entry.name);

    expect(dirNames).not.toContain('node_modules');
    expect(dirNames).toContain('src');
    expect(dirNames).toContain('.git');
  });

  it('should exclude nested directories with same pattern', async () => {
    const tree = await buildTreeForTesting(testDir, testDir, ['node_modules']);

    // Find the nested directory
    const nestedDir = tree.find(entry => entry.name === 'nested');
    expect(nestedDir).toBeDefined();
    expect(nestedDir!.children).toBeDefined();

    // The nested/node_modules should also be excluded
    const nestedChildren = nestedDir!.children!.map(child => child.name);
    expect(nestedChildren).not.toContain('node_modules');
  });

  it('should handle glob patterns correctly', async () => {
    const tree = await buildTreeForTesting(testDir, testDir, ['*.env']);
    const fileNames = tree.map(entry => entry.name);

    expect(fileNames).not.toContain('.env');
    expect(fileNames).toContain('.env.local'); // *.env should not match .env.local
    expect(fileNames).toContain('src');
  });

  it('should handle dot files correctly', async () => {
    const tree = await buildTreeForTesting(testDir, testDir, ['.git']);
    const dirNames = tree.map(entry => entry.name);

    expect(dirNames).not.toContain('.git');
    expect(dirNames).toContain('.env'); // Should not exclude this
  });

  it('should work with multiple exclude patterns', async () => {
    const tree = await buildTreeForTesting(testDir, testDir, ['node_modules', '.env', '.git']);
    const entryNames = tree.map(entry => entry.name);

    expect(entryNames).not.toContain('node_modules');
    expect(entryNames).not.toContain('.env');
    expect(entryNames).not.toContain('.git');
    expect(entryNames).toContain('src');
    expect(entryNames).toContain('package.json');
  });

  it('should handle empty exclude patterns', async () => {
    const tree = await buildTreeForTesting(testDir, testDir, []);
    const entryNames = tree.map(entry => entry.name);

    // All entries should be included
    expect(entryNames).toContain('node_modules');
    expect(entryNames).toContain('.env');
    expect(entryNames).toContain('.git');
    expect(entryNames).toContain('src');
  });
});
725 mcpServer/modules/filesystem/__tests__/lib.test.ts Normal file
@@ -0,0 +1,725 @@
import { describe, it, expect, beforeEach, afterEach, vi } from 'vitest';
import fs from 'fs/promises';
import path from 'path';
import os from 'os';
import {
  // Pure utility functions
  formatSize,
  normalizeLineEndings,
  createUnifiedDiff,
  // Security & validation functions
  validatePath,
  setAllowedDirectories,
  // File operations
  getFileStats,
  readFileContent,
  writeFileContent,
  // Search & filtering functions
  searchFilesWithValidation,
  // File editing functions
  applyFileEdits,
  tailFile,
  headFile
} from '../lib.js';

// Mock fs module
vi.mock('fs/promises');
const mockFs = fs as any;

describe('Lib Functions', () => {
  beforeEach(() => {
    vi.clearAllMocks();
    // Set up allowed directories for tests
    const allowedDirs = process.platform === 'win32' ? ['C:\\Users\\test', 'C:\\temp', 'C:\\allowed'] : ['/home/user', '/tmp', '/allowed'];
    setAllowedDirectories(allowedDirs);
  });

  afterEach(() => {
    vi.restoreAllMocks();
    // Clear allowed directories after tests
    setAllowedDirectories([]);
  });

  describe('Pure Utility Functions', () => {
    describe('formatSize', () => {
      it('formats bytes correctly', () => {
        expect(formatSize(0)).toBe('0 B');
        expect(formatSize(512)).toBe('512 B');
        expect(formatSize(1024)).toBe('1.00 KB');
        expect(formatSize(1536)).toBe('1.50 KB');
        expect(formatSize(1048576)).toBe('1.00 MB');
        expect(formatSize(1073741824)).toBe('1.00 GB');
        expect(formatSize(1099511627776)).toBe('1.00 TB');
      });

      it('handles edge cases', () => {
        expect(formatSize(1023)).toBe('1023 B');
        expect(formatSize(1025)).toBe('1.00 KB');
        expect(formatSize(1048575)).toBe('1024.00 KB');
      });

      it('handles very large numbers beyond TB', () => {
        // The function only supports up to TB, so very large numbers will show as TB
        expect(formatSize(1024 * 1024 * 1024 * 1024 * 1024)).toBe('1024.00 TB');
        expect(formatSize(Number.MAX_SAFE_INTEGER)).toContain('TB');
      });

      it('handles negative numbers', () => {
        // Negative numbers will result in NaN for the log calculation
        expect(formatSize(-1024)).toContain('NaN');
        expect(formatSize(-0)).toBe('0 B');
      });

      it('handles decimal numbers', () => {
        expect(formatSize(1536.5)).toBe('1.50 KB');
        expect(formatSize(1023.9)).toBe('1023.9 B');
      });

      it('handles very small positive numbers', () => {
        expect(formatSize(1)).toBe('1 B');
        expect(formatSize(0.5)).toBe('0.5 B');
        expect(formatSize(0.1)).toBe('0.1 B');
      });
    });

    describe('normalizeLineEndings', () => {
      it('converts CRLF to LF', () => {
        expect(normalizeLineEndings('line1\r\nline2\r\nline3')).toBe('line1\nline2\nline3');
      });

      it('leaves LF unchanged', () => {
        expect(normalizeLineEndings('line1\nline2\nline3')).toBe('line1\nline2\nline3');
      });

      it('handles mixed line endings', () => {
        expect(normalizeLineEndings('line1\r\nline2\nline3\r\n')).toBe('line1\nline2\nline3\n');
      });

      it('handles empty string', () => {
        expect(normalizeLineEndings('')).toBe('');
      });
    });

    describe('createUnifiedDiff', () => {
      it('creates diff for simple changes', () => {
        const original = 'line1\nline2\nline3';
        const modified = 'line1\nmodified line2\nline3';
        const diff = createUnifiedDiff(original, modified, 'test.txt');

        expect(diff).toContain('--- test.txt');
        expect(diff).toContain('+++ test.txt');
        expect(diff).toContain('-line2');
        expect(diff).toContain('+modified line2');
      });

      it('handles CRLF normalization', () => {
        const original = 'line1\r\nline2\r\n';
        const modified = 'line1\nmodified line2\n';
        const diff = createUnifiedDiff(original, modified);

        expect(diff).toContain('-line2');
        expect(diff).toContain('+modified line2');
      });

      it('handles identical content', () => {
        const content = 'line1\nline2\nline3';
        const diff = createUnifiedDiff(content, content);

        // Should not contain any +/- lines for identical content (excluding header lines)
        expect(diff.split('\n').filter((line: string) => line.startsWith('+++') || line.startsWith('---'))).toHaveLength(2);
        expect(diff.split('\n').filter((line: string) => line.startsWith('+') && !line.startsWith('+++'))).toHaveLength(0);
        expect(diff.split('\n').filter((line: string) => line.startsWith('-') && !line.startsWith('---'))).toHaveLength(0);
      });

      it('handles empty content', () => {
        const diff = createUnifiedDiff('', '');
        expect(diff).toContain('--- file');
        expect(diff).toContain('+++ file');
      });

      it('handles default filename parameter', () => {
        const diff = createUnifiedDiff('old', 'new');
        expect(diff).toContain('--- file');
        expect(diff).toContain('+++ file');
      });

      it('handles custom filename', () => {
        const diff = createUnifiedDiff('old', 'new', 'custom.txt');
        expect(diff).toContain('--- custom.txt');
        expect(diff).toContain('+++ custom.txt');
      });
    });
  });

  describe('Security & Validation Functions', () => {
    describe('validatePath', () => {
      // Use Windows-compatible paths for testing
      const allowedDirs = process.platform === 'win32' ? ['C:\\Users\\test', 'C:\\temp'] : ['/home/user', '/tmp'];

      beforeEach(() => {
        mockFs.realpath.mockImplementation(async (path: any) => path.toString());
      });

      it('validates allowed paths', async () => {
        const testPath = process.platform === 'win32' ? 'C:\\Users\\test\\file.txt' : '/home/user/file.txt';
        const result = await validatePath(testPath);
        expect(result).toBe(testPath);
      });

      it('rejects disallowed paths', async () => {
        const testPath = process.platform === 'win32' ? 'C:\\Windows\\System32\\file.txt' : '/etc/passwd';
        await expect(validatePath(testPath))
          .rejects.toThrow('Access denied - path outside allowed directories');
      });

      it('handles non-existent files by checking parent directory', async () => {
        const newFilePath = process.platform === 'win32' ? 'C:\\Users\\test\\newfile.txt' : '/home/user/newfile.txt';
        const parentPath = process.platform === 'win32' ? 'C:\\Users\\test' : '/home/user';

        // Create an error with the ENOENT code that the implementation checks for
        const enoentError = new Error('ENOENT') as NodeJS.ErrnoException;
        enoentError.code = 'ENOENT';

        mockFs.realpath
          .mockRejectedValueOnce(enoentError)
          .mockResolvedValueOnce(parentPath);

        const result = await validatePath(newFilePath);
        expect(result).toBe(path.resolve(newFilePath));
      });

      it('rejects when parent directory does not exist', async () => {
        const newFilePath = process.platform === 'win32' ? 'C:\\Users\\test\\nonexistent\\newfile.txt' : '/home/user/nonexistent/newfile.txt';

        // Create errors with the ENOENT code
        const enoentError1 = new Error('ENOENT') as NodeJS.ErrnoException;
        enoentError1.code = 'ENOENT';
        const enoentError2 = new Error('ENOENT') as NodeJS.ErrnoException;
        enoentError2.code = 'ENOENT';

        mockFs.realpath
          .mockRejectedValueOnce(enoentError1)
          .mockRejectedValueOnce(enoentError2);

        await expect(validatePath(newFilePath))
          .rejects.toThrow('Parent directory does not exist');
      });

      it('resolves relative paths against allowed directories instead of process.cwd()', async () => {
        const relativePath = 'test-file.txt';
        const originalCwd = process.cwd;

        // Mock process.cwd to return a directory outside allowed directories
        const disallowedCwd = process.platform === 'win32' ? 'C:\\Windows\\System32' : '/root';
        (process as any).cwd = vi.fn(() => disallowedCwd);

        try {
          const result = await validatePath(relativePath);

          // Result should be resolved against first allowed directory, not process.cwd()
          const expectedPath = process.platform === 'win32'
            ? path.resolve('C:\\Users\\test', relativePath)
            : path.resolve('/home/user', relativePath);

          expect(result).toBe(expectedPath);
          expect(result).not.toContain(disallowedCwd);
        } finally {
          // Restore original process.cwd
          process.cwd = originalCwd;
        }
      });
    });
  });

  describe('File Operations', () => {
    describe('getFileStats', () => {
      it('returns file statistics', async () => {
        const mockStats = {
          size: 1024,
          birthtime: new Date('2023-01-01'),
          mtime: new Date('2023-01-02'),
          atime: new Date('2023-01-03'),
          isDirectory: () => false,
          isFile: () => true,
          mode: 0o644
        };

        mockFs.stat.mockResolvedValueOnce(mockStats as any);

        const result = await getFileStats('/test/file.txt');

        expect(result).toEqual({
          size: 1024,
          created: new Date('2023-01-01'),
          modified: new Date('2023-01-02'),
          accessed: new Date('2023-01-03'),
          isDirectory: false,
          isFile: true,
          permissions: '644'
        });
      });

      it('handles directory statistics', async () => {
        const mockStats = {
          size: 4096,
          birthtime: new Date('2023-01-01'),
          mtime: new Date('2023-01-02'),
          atime: new Date('2023-01-03'),
          isDirectory: () => true,
          isFile: () => false,
          mode: 0o755
        };

        mockFs.stat.mockResolvedValueOnce(mockStats as any);

        const result = await getFileStats('/test/dir');

        expect(result.isDirectory).toBe(true);
        expect(result.isFile).toBe(false);
        expect(result.permissions).toBe('755');
      });
    });

    describe('readFileContent', () => {
      it('reads file with default encoding', async () => {
        mockFs.readFile.mockResolvedValueOnce('file content');

        const result = await readFileContent('/test/file.txt');

        expect(result).toBe('file content');
        expect(mockFs.readFile).toHaveBeenCalledWith('/test/file.txt', 'utf-8');
      });

      it('reads file with custom encoding', async () => {
        mockFs.readFile.mockResolvedValueOnce('file content');

        const result = await readFileContent('/test/file.txt', 'ascii');

        expect(result).toBe('file content');
        expect(mockFs.readFile).toHaveBeenCalledWith('/test/file.txt', 'ascii');
      });
    });

    describe('writeFileContent', () => {
      it('writes file content', async () => {
        mockFs.writeFile.mockResolvedValueOnce(undefined);

        await writeFileContent('/test/file.txt', 'new content');

        expect(mockFs.writeFile).toHaveBeenCalledWith('/test/file.txt', 'new content', { encoding: "utf-8", flag: 'wx' });
      });
    });
  });

  describe('Search & Filtering Functions', () => {
    describe('searchFilesWithValidation', () => {
      beforeEach(() => {
        mockFs.realpath.mockImplementation(async (path: any) => path.toString());
      });

      it('excludes files matching exclude patterns', async () => {
        const mockEntries = [
          { name: 'test.txt', isDirectory: () => false },
          { name: 'test.log', isDirectory: () => false },
          { name: 'node_modules', isDirectory: () => true }
        ];

        mockFs.readdir.mockResolvedValueOnce(mockEntries as any);

        const testDir = process.platform === 'win32' ? 'C:\\allowed\\dir' : '/allowed/dir';
        const allowedDirs = process.platform === 'win32' ? ['C:\\allowed'] : ['/allowed'];

        // Mock realpath to return the same path for validation to pass
        mockFs.realpath.mockImplementation(async (inputPath: any) => {
          const pathStr = inputPath.toString();
          // Return the path as-is for validation
          return pathStr;
        });

        const result = await searchFilesWithValidation(
          testDir,
          '*test*',
          allowedDirs,
          { excludePatterns: ['*.log', 'node_modules'] }
        );

        const expectedResult = process.platform === 'win32' ? 'C:\\allowed\\dir\\test.txt' : '/allowed/dir/test.txt';
        expect(result).toEqual([expectedResult]);
      });

      it('handles validation errors during search', async () => {
        const mockEntries = [
          { name: 'test.txt', isDirectory: () => false },
          { name: 'invalid_file.txt', isDirectory: () => false }
        ];

        mockFs.readdir.mockResolvedValueOnce(mockEntries as any);

        // Mock validatePath to throw error for invalid_file.txt
        mockFs.realpath.mockImplementation(async (path: any) => {
          if (path.toString().includes('invalid_file.txt')) {
            throw new Error('Access denied');
          }
          return path.toString();
        });

        const testDir = process.platform === 'win32' ? 'C:\\allowed\\dir' : '/allowed/dir';
        const allowedDirs = process.platform === 'win32' ? ['C:\\allowed'] : ['/allowed'];

        const result = await searchFilesWithValidation(
          testDir,
          '*test*',
          allowedDirs,
          {}
        );

        // Should only return the valid file, skipping the invalid one
        const expectedResult = process.platform === 'win32' ? 'C:\\allowed\\dir\\test.txt' : '/allowed/dir/test.txt';
        expect(result).toEqual([expectedResult]);
      });

      it('handles complex exclude patterns with wildcards', async () => {
        const mockEntries = [
          { name: 'test.txt', isDirectory: () => false },
          { name: 'test.backup', isDirectory: () => false },
          { name: 'important_test.js', isDirectory: () => false }
        ];

        mockFs.readdir.mockResolvedValueOnce(mockEntries as any);

        const testDir = process.platform === 'win32' ? 'C:\\allowed\\dir' : '/allowed/dir';
        const allowedDirs = process.platform === 'win32' ? ['C:\\allowed'] : ['/allowed'];

        const result = await searchFilesWithValidation(
          testDir,
          '*test*',
          allowedDirs,
          { excludePatterns: ['*.backup'] }
        );

        const expectedResults = process.platform === 'win32' ? [
          'C:\\allowed\\dir\\test.txt',
          'C:\\allowed\\dir\\important_test.js'
        ] : [
          '/allowed/dir/test.txt',
          '/allowed/dir/important_test.js'
        ];
        expect(result).toEqual(expectedResults);
      });
    });
  });

  describe('File Editing Functions', () => {
    describe('applyFileEdits', () => {
      beforeEach(() => {
        mockFs.readFile.mockResolvedValue('line1\nline2\nline3\n');
        mockFs.writeFile.mockResolvedValue(undefined);
      });

      it('applies simple text replacement', async () => {
        const edits = [
          { oldText: 'line2', newText: 'modified line2' }
        ];

        mockFs.rename.mockResolvedValueOnce(undefined);

        const result = await applyFileEdits('/test/file.txt', edits, false);

        expect(result).toContain('modified line2');
        // Should write to temporary file then rename
        expect(mockFs.writeFile).toHaveBeenCalledWith(
          expect.stringMatching(/\/test\/file\.txt\.[a-f0-9]+\.tmp$/),
          'line1\nmodified line2\nline3\n',
          'utf-8'
        );
        expect(mockFs.rename).toHaveBeenCalledWith(
          expect.stringMatching(/\/test\/file\.txt\.[a-f0-9]+\.tmp$/),
          '/test/file.txt'
        );
      });

      it('handles dry run mode', async () => {
        const edits = [
          { oldText: 'line2', newText: 'modified line2' }
        ];

        const result = await applyFileEdits('/test/file.txt', edits, true);

        expect(result).toContain('modified line2');
        expect(mockFs.writeFile).not.toHaveBeenCalled();
      });

      it('applies multiple edits sequentially', async () => {
        const edits = [
          { oldText: 'line1', newText: 'first line' },
          { oldText: 'line3', newText: 'third line' }
        ];

        mockFs.rename.mockResolvedValueOnce(undefined);

        await applyFileEdits('/test/file.txt', edits, false);

        expect(mockFs.writeFile).toHaveBeenCalledWith(
          expect.stringMatching(/\/test\/file\.txt\.[a-f0-9]+\.tmp$/),
          'first line\nline2\nthird line\n',
          'utf-8'
        );
        expect(mockFs.rename).toHaveBeenCalledWith(
          expect.stringMatching(/\/test\/file\.txt\.[a-f0-9]+\.tmp$/),
          '/test/file.txt'
        );
      });

      it('handles whitespace-flexible matching', async () => {
        mockFs.readFile.mockResolvedValue(' line1\n line2\n line3\n');

        const edits = [
          { oldText: 'line2', newText: 'modified line2' }
        ];

        mockFs.rename.mockResolvedValueOnce(undefined);

        await applyFileEdits('/test/file.txt', edits, false);

        expect(mockFs.writeFile).toHaveBeenCalledWith(
          expect.stringMatching(/\/test\/file\.txt\.[a-f0-9]+\.tmp$/),
          ' line1\n modified line2\n line3\n',
          'utf-8'
        );
        expect(mockFs.rename).toHaveBeenCalledWith(
          expect.stringMatching(/\/test\/file\.txt\.[a-f0-9]+\.tmp$/),
          '/test/file.txt'
        );
      });

      it('throws error for non-matching edits', async () => {
        const edits = [
          { oldText: 'nonexistent line', newText: 'replacement' }
        ];

        await expect(applyFileEdits('/test/file.txt', edits, false))
          .rejects.toThrow('Could not find exact match for edit');
      });

      it('handles complex multi-line edits with indentation', async () => {
        mockFs.readFile.mockResolvedValue('function test() {\n console.log("hello");\n return true;\n}');

        const edits = [
          {
            oldText: ' console.log("hello");\n return true;',
            newText: ' console.log("world");\n console.log("test");\n return false;'
          }
        ];

        mockFs.rename.mockResolvedValueOnce(undefined);

        await applyFileEdits('/test/file.js', edits, false);

        expect(mockFs.writeFile).toHaveBeenCalledWith(
          expect.stringMatching(/\/test\/file\.js\.[a-f0-9]+\.tmp$/),
          'function test() {\n console.log("world");\n console.log("test");\n return false;\n}',
          'utf-8'
        );
        expect(mockFs.rename).toHaveBeenCalledWith(
          expect.stringMatching(/\/test\/file\.js\.[a-f0-9]+\.tmp$/),
          '/test/file.js'
        );
      });

      it('handles edits with different indentation patterns', async () => {
        mockFs.readFile.mockResolvedValue(' if (condition) {\n doSomething();\n }');

        const edits = [
          {
            oldText: 'doSomething();',
            newText: 'doSomethingElse();\n doAnotherThing();'
          }
        ];

        mockFs.rename.mockResolvedValueOnce(undefined);

        await applyFileEdits('/test/file.js', edits, false);

        expect(mockFs.writeFile).toHaveBeenCalledWith(
          expect.stringMatching(/\/test\/file\.js\.[a-f0-9]+\.tmp$/),
          ' if (condition) {\n doSomethingElse();\n doAnotherThing();\n }',
          'utf-8'
        );
        expect(mockFs.rename).toHaveBeenCalledWith(
          expect.stringMatching(/\/test\/file\.js\.[a-f0-9]+\.tmp$/),
          '/test/file.js'
        );
      });

      it('handles CRLF line endings in file content', async () => {
        mockFs.readFile.mockResolvedValue('line1\r\nline2\r\nline3\r\n');

        const edits = [
          { oldText: 'line2', newText: 'modified line2' }
        ];

        mockFs.rename.mockResolvedValueOnce(undefined);

        await applyFileEdits('/test/file.txt', edits, false);

        expect(mockFs.writeFile).toHaveBeenCalledWith(
          expect.stringMatching(/\/test\/file\.txt\.[a-f0-9]+\.tmp$/),
          'line1\nmodified line2\nline3\n',
          'utf-8'
        );
        expect(mockFs.rename).toHaveBeenCalledWith(
          expect.stringMatching(/\/test\/file\.txt\.[a-f0-9]+\.tmp$/),
          '/test/file.txt'
        );
      });
    });

    describe('tailFile', () => {
      it('handles empty files', async () => {
        mockFs.stat.mockResolvedValue({ size: 0 } as any);

        const result = await tailFile('/test/empty.txt', 5);

        expect(result).toBe('');
        expect(mockFs.open).not.toHaveBeenCalled();
      });

      it('calls stat to check file size', async () => {
        mockFs.stat.mockResolvedValue({ size: 100 } as any);

        // Mock file handle with proper typing
        const mockFileHandle = {
          read: vi.fn(),
          close: vi.fn()
        } as any;

        mockFileHandle.read.mockResolvedValue({ bytesRead: 0 });
        mockFileHandle.close.mockResolvedValue(undefined);

        mockFs.open.mockResolvedValue(mockFileHandle);

        await tailFile('/test/file.txt', 2);

        expect(mockFs.stat).toHaveBeenCalledWith('/test/file.txt');
        expect(mockFs.open).toHaveBeenCalledWith('/test/file.txt', 'r');
      });

      it('handles files with content and returns last lines', async () => {
        mockFs.stat.mockResolvedValue({ size: 50 } as any);

        const mockFileHandle = {
          read: vi.fn(),
          close: vi.fn()
        } as any;

        // Simulate reading file content in chunks
        mockFileHandle.read
          .mockResolvedValueOnce({ bytesRead: 20, buffer: Buffer.from('line3\nline4\nline5\n') })
          .mockResolvedValueOnce({ bytesRead: 0 });
        mockFileHandle.close.mockResolvedValue(undefined);

        mockFs.open.mockResolvedValue(mockFileHandle);

        const result = await tailFile('/test/file.txt', 2);

        expect(mockFileHandle.close).toHaveBeenCalled();
      });

      it('handles read errors gracefully', async () => {
        mockFs.stat.mockResolvedValue({ size: 100 } as any);

        const mockFileHandle = {
          read: vi.fn(),
          close: vi.fn()
        } as any;

        mockFileHandle.read.mockResolvedValue({ bytesRead: 0 });
        mockFileHandle.close.mockResolvedValue(undefined);

        mockFs.open.mockResolvedValue(mockFileHandle);

        await tailFile('/test/file.txt', 5);

        expect(mockFileHandle.close).toHaveBeenCalled();
      });
    });

    describe('headFile', () => {
      it('opens file for reading', async () => {
        // Mock file handle with proper typing
        const mockFileHandle = {
          read: vi.fn(),
          close: vi.fn()
        } as any;

        mockFileHandle.read.mockResolvedValue({ bytesRead: 0 });
        mockFileHandle.close.mockResolvedValue(undefined);

        mockFs.open.mockResolvedValue(mockFileHandle);

        await headFile('/test/file.txt', 2);

        expect(mockFs.open).toHaveBeenCalledWith('/test/file.txt', 'r');
      });

      it('handles files with content and returns first lines', async () => {
        const mockFileHandle = {
          read: vi.fn(),
          close: vi.fn()
        } as any;

        // Simulate reading file content with newlines
        mockFileHandle.read
          .mockResolvedValueOnce({ bytesRead: 20, buffer: Buffer.from('line1\nline2\nline3\n') })
          .mockResolvedValueOnce({ bytesRead: 0 });
        mockFileHandle.close.mockResolvedValue(undefined);

        mockFs.open.mockResolvedValue(mockFileHandle);

        const result = await headFile('/test/file.txt', 2);

        expect(mockFileHandle.close).toHaveBeenCalled();
      });

      it('handles files with leftover content', async () => {
        const mockFileHandle = {
          read: vi.fn(),
          close: vi.fn()
        } as any;

        // Simulate reading file content without final newline
        mockFileHandle.read
          .mockResolvedValueOnce({ bytesRead: 15, buffer: Buffer.from('line1\nline2\nend') })
          .mockResolvedValueOnce({ bytesRead: 0 });
        mockFileHandle.close.mockResolvedValue(undefined);

        mockFs.open.mockResolvedValue(mockFileHandle);

        const result = await headFile('/test/file.txt', 5);

        expect(mockFileHandle.close).toHaveBeenCalled();
      });

      it('handles reaching requested line count', async () => {
        const mockFileHandle = {
          read: vi.fn(),
          close: vi.fn()
        } as any;

        // Simulate reading exactly the requested number of lines
        mockFileHandle.read
          .mockResolvedValueOnce({ bytesRead: 12, buffer: Buffer.from('line1\nline2\n') })
          .mockResolvedValueOnce({ bytesRead: 0 });
        mockFileHandle.close.mockResolvedValue(undefined);

        mockFs.open.mockResolvedValue(mockFileHandle);

        const result = await headFile('/test/file.txt', 2);

        expect(mockFileHandle.close).toHaveBeenCalled();
      });
    });
  });
});
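The formatSize assertions above fully pin down the expected behavior: binary units, two decimals above bytes, capped at TB, and NaN leaking through for negative inputs. A minimal implementation consistent with those tests might look like the sketch below; it is an illustration, not the actual code in `mcpServer/modules/filesystem/lib.ts`.

```ts
// Sketch: one formatSize consistent with the tests above (assumed, not the module's code).
function formatSize(bytes: number): string {
  if (bytes === 0) return '0 B'; // also covers -0
  const units = ['B', 'KB', 'MB', 'GB', 'TB'];
  // Pick the unit via log base 1024; clamp to [B, TB]. Math.log of a negative
  // number is NaN, which propagates into the output, as the tests expect.
  const i = Math.min(Math.max(Math.floor(Math.log(bytes) / Math.log(1024)), 0), units.length - 1);
  if (i === 0) return `${bytes} B`; // e.g. 512 -> '512 B', 1023.9 -> '1023.9 B'
  return `${(bytes / Math.pow(1024, i)).toFixed(2)} ${units[i]}`; // 1536 -> '1.50 KB'
}
```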
371
mcpServer/modules/filesystem/__tests__/path-utils.test.ts
Normal file
@@ -0,0 +1,371 @@
import { describe, it, expect, afterEach } from 'vitest';
import { normalizePath, expandHome, convertToWindowsPath } from '../path-utils.js';

describe('Path Utilities', () => {
  describe('convertToWindowsPath', () => {
    it('leaves Unix paths unchanged', () => {
      expect(convertToWindowsPath('/usr/local/bin'))
        .toBe('/usr/local/bin');
      expect(convertToWindowsPath('/home/user/some path'))
        .toBe('/home/user/some path');
    });

    it('never converts WSL paths (they work correctly in WSL with Node.js fs)', () => {
      // WSL paths should NEVER be converted, regardless of platform
      // They are valid Linux paths that work with Node.js fs operations inside WSL
      expect(convertToWindowsPath('/mnt/c/NS/MyKindleContent'))
        .toBe('/mnt/c/NS/MyKindleContent');
      expect(convertToWindowsPath('/mnt/d/Documents'))
        .toBe('/mnt/d/Documents');
    });

    it('converts Unix-style Windows paths only on Windows platform', () => {
      // On Windows, /c/ style paths should be converted
      if (process.platform === 'win32') {
        expect(convertToWindowsPath('/c/NS/MyKindleContent'))
          .toBe('C:\\NS\\MyKindleContent');
      } else {
        // On Linux, leave them unchanged
        expect(convertToWindowsPath('/c/NS/MyKindleContent'))
          .toBe('/c/NS/MyKindleContent');
      }
    });

    it('leaves Windows paths unchanged but ensures backslashes', () => {
      expect(convertToWindowsPath('C:\\NS\\MyKindleContent'))
        .toBe('C:\\NS\\MyKindleContent');
      expect(convertToWindowsPath('C:/NS/MyKindleContent'))
        .toBe('C:\\NS\\MyKindleContent');
    });

    it('handles Windows paths with spaces', () => {
      expect(convertToWindowsPath('C:\\Program Files\\Some App'))
        .toBe('C:\\Program Files\\Some App');
      expect(convertToWindowsPath('C:/Program Files/Some App'))
        .toBe('C:\\Program Files\\Some App');
    });

    it('handles drive letter paths based on platform', () => {
      // WSL paths should never be converted
      expect(convertToWindowsPath('/mnt/d/some/path'))
        .toBe('/mnt/d/some/path');

      if (process.platform === 'win32') {
        // On Windows, Unix-style paths like /d/ should be converted
        expect(convertToWindowsPath('/d/some/path'))
          .toBe('D:\\some\\path');
      } else {
        // On Linux, /d/ is just a regular Unix path
        expect(convertToWindowsPath('/d/some/path'))
          .toBe('/d/some/path');
      }
    });
  });

  describe('normalizePath', () => {
    it('preserves Unix paths', () => {
      expect(normalizePath('/usr/local/bin'))
        .toBe('/usr/local/bin');
      expect(normalizePath('/home/user/some path'))
        .toBe('/home/user/some path');
      expect(normalizePath('"/usr/local/some app/"'))
        .toBe('/usr/local/some app');
      expect(normalizePath('/usr/local//bin/app///'))
        .toBe('/usr/local/bin/app');
      expect(normalizePath('/'))
        .toBe('/');
      expect(normalizePath('///'))
        .toBe('/');
    });

    it('removes surrounding quotes', () => {
      expect(normalizePath('"C:\\NS\\My Kindle Content"'))
        .toBe('C:\\NS\\My Kindle Content');
    });

    it('normalizes backslashes', () => {
      expect(normalizePath('C:\\\\NS\\\\MyKindleContent'))
        .toBe('C:\\NS\\MyKindleContent');
    });

    it('converts forward slashes to backslashes on Windows', () => {
      expect(normalizePath('C:/NS/MyKindleContent'))
        .toBe('C:\\NS\\MyKindleContent');
    });

    it('always preserves WSL paths (they work correctly in WSL)', () => {
      // WSL paths should ALWAYS be preserved, regardless of platform
      // This is the fix for issue #2795
      expect(normalizePath('/mnt/c/NS/MyKindleContent'))
        .toBe('/mnt/c/NS/MyKindleContent');
      expect(normalizePath('/mnt/d/Documents'))
        .toBe('/mnt/d/Documents');
    });

    it('handles Unix-style Windows paths', () => {
      // On Windows, /c/ paths should be converted
      if (process.platform === 'win32') {
        expect(normalizePath('/c/NS/MyKindleContent'))
          .toBe('C:\\NS\\MyKindleContent');
      } else if (process.platform === 'linux') {
        // On Linux, /c/ is just a regular Unix path
        expect(normalizePath('/c/NS/MyKindleContent'))
          .toBe('/c/NS/MyKindleContent');
      }
    });

    it('handles paths with spaces and mixed slashes', () => {
      expect(normalizePath('C:/NS/My Kindle Content'))
        .toBe('C:\\NS\\My Kindle Content');
      // WSL paths should always be preserved
      expect(normalizePath('/mnt/c/NS/My Kindle Content'))
        .toBe('/mnt/c/NS/My Kindle Content');
      expect(normalizePath('C:\\Program Files (x86)\\App Name'))
        .toBe('C:\\Program Files (x86)\\App Name');
      expect(normalizePath('"C:\\Program Files\\App Name"'))
        .toBe('C:\\Program Files\\App Name');
      expect(normalizePath(' C:\\Program Files\\App Name '))
        .toBe('C:\\Program Files\\App Name');
    });

    it('preserves spaces in all path formats', () => {
      // WSL paths should always be preserved
      expect(normalizePath('/mnt/c/Program Files/App Name'))
        .toBe('/mnt/c/Program Files/App Name');

      if (process.platform === 'win32') {
        // On Windows, Unix-style paths like /c/ should be converted
        expect(normalizePath('/c/Program Files/App Name'))
          .toBe('C:\\Program Files\\App Name');
      } else {
        // On Linux, /c/ is just a regular Unix path
        expect(normalizePath('/c/Program Files/App Name'))
          .toBe('/c/Program Files/App Name');
      }
      expect(normalizePath('C:/Program Files/App Name'))
        .toBe('C:\\Program Files\\App Name');
    });

    it('handles special characters in paths', () => {
      // Test ampersand in path
      expect(normalizePath('C:\\NS\\Sub&Folder'))
        .toBe('C:\\NS\\Sub&Folder');
      expect(normalizePath('C:/NS/Sub&Folder'))
        .toBe('C:\\NS\\Sub&Folder');
      // WSL paths should always be preserved
      expect(normalizePath('/mnt/c/NS/Sub&Folder'))
        .toBe('/mnt/c/NS/Sub&Folder');

      // Test tilde in path (short names in Windows)
      expect(normalizePath('C:\\NS\\MYKIND~1'))
        .toBe('C:\\NS\\MYKIND~1');
      expect(normalizePath('/Users/NEMANS~1/FOLDER~2/SUBFO~1/Public/P12PST~1'))
        .toBe('/Users/NEMANS~1/FOLDER~2/SUBFO~1/Public/P12PST~1');

      // Test other special characters
      expect(normalizePath('C:\\Path with #hash'))
        .toBe('C:\\Path with #hash');
      expect(normalizePath('C:\\Path with (parentheses)'))
        .toBe('C:\\Path with (parentheses)');
      expect(normalizePath('C:\\Path with [brackets]'))
        .toBe('C:\\Path with [brackets]');
      expect(normalizePath('C:\\Path with @at+plus$dollar%percent'))
        .toBe('C:\\Path with @at+plus$dollar%percent');
    });

    it('capitalizes lowercase drive letters for Windows paths', () => {
      expect(normalizePath('c:/windows/system32'))
        .toBe('C:\\windows\\system32');
      // WSL paths should always be preserved
      expect(normalizePath('/mnt/d/my/folder'))
        .toBe('/mnt/d/my/folder');

      if (process.platform === 'win32') {
        // On Windows, Unix-style paths should be converted and capitalized
        expect(normalizePath('/e/another/folder'))
          .toBe('E:\\another\\folder');
      } else {
        // On Linux, /e/ is just a regular Unix path
        expect(normalizePath('/e/another/folder'))
          .toBe('/e/another/folder');
      }
    });

    it('handles UNC paths correctly', () => {
      // UNC paths should preserve the leading double backslash
      const uncPath = '\\\\SERVER\\share\\folder';
      expect(normalizePath(uncPath)).toBe('\\\\SERVER\\share\\folder');

      // Test UNC path with double backslashes that need normalization
      const uncPathWithDoubles = '\\\\\\\\SERVER\\\\share\\\\folder';
      expect(normalizePath(uncPathWithDoubles)).toBe('\\\\SERVER\\share\\folder');
    });

    it('returns normalized non-Windows/WSL/Unix-style Windows paths as is after basic normalization', () => {
      // A path that looks somewhat absolute but isn't a drive or recognized Unix root for Windows conversion
      // These paths should be preserved as-is (not converted to Windows C:\ format or WSL format)
      const otherAbsolutePath = '\\someserver\\share\\file';
      expect(normalizePath(otherAbsolutePath)).toBe(otherAbsolutePath);
    });
  });

  describe('expandHome', () => {
    it('expands ~ to home directory', () => {
      const result = expandHome('~/test');
      expect(result).toContain('test');
      expect(result).not.toContain('~');
    });

    it('expands bare ~ to home directory', () => {
      const result = expandHome('~');
      expect(result).not.toContain('~');
      expect(result.length).toBeGreaterThan(0);
    });

    it('leaves other paths unchanged', () => {
      expect(expandHome('C:/test')).toBe('C:/test');
    });
  });

  describe('WSL path handling (issue #2795 fix)', () => {
    // Save original platform
    const originalPlatform = process.platform;

    afterEach(() => {
      // Restore platform after each test
      Object.defineProperty(process, 'platform', {
        value: originalPlatform,
        writable: true,
        configurable: true
      });
    });

    it('should NEVER convert WSL paths - they work correctly in WSL with Node.js fs', () => {
      // The key insight: When running `wsl npx ...`, Node.js runs INSIDE WSL (process.platform === 'linux')
      // and /mnt/c/ paths work correctly with Node.js fs operations in that environment.
      // Converting them to C:\ format breaks fs operations because Windows paths don't work inside WSL.

      // Mock Linux platform (inside WSL)
      Object.defineProperty(process, 'platform', {
        value: 'linux',
        writable: true,
        configurable: true
      });

      // WSL paths should NOT be converted, even inside WSL
      expect(normalizePath('/mnt/c/Users/username/folder'))
        .toBe('/mnt/c/Users/username/folder');

      expect(normalizePath('/mnt/d/Documents/project'))
        .toBe('/mnt/d/Documents/project');
    });

    it('should also preserve WSL paths when running on Windows', () => {
      // Mock Windows platform
      Object.defineProperty(process, 'platform', {
        value: 'win32',
        writable: true,
        configurable: true
      });

      // WSL paths should still be preserved (though they wouldn't be accessible from Windows Node.js)
      expect(normalizePath('/mnt/c/Users/username/folder'))
        .toBe('/mnt/c/Users/username/folder');

      expect(normalizePath('/mnt/d/Documents/project'))
        .toBe('/mnt/d/Documents/project');
    });

    it('should convert Unix-style Windows paths (/c/) only when running on Windows (win32)', () => {
      // Mock process.platform to be 'win32' (Windows)
      Object.defineProperty(process, 'platform', {
        value: 'win32',
        writable: true,
        configurable: true
      });

      // Unix-style Windows paths like /c/ should be converted on Windows
      expect(normalizePath('/c/Users/username/folder'))
        .toBe('C:\\Users\\username\\folder');

      expect(normalizePath('/d/Documents/project'))
        .toBe('D:\\Documents\\project');
    });

    it('should NOT convert Unix-style paths (/c/) when running inside WSL (linux)', () => {
      // Mock process.platform to be 'linux' (WSL/Linux)
      Object.defineProperty(process, 'platform', {
        value: 'linux',
        writable: true,
        configurable: true
      });

      // When on Linux, /c/ is just a regular Unix directory, not a drive letter
      expect(normalizePath('/c/some/path'))
        .toBe('/c/some/path');

      expect(normalizePath('/d/another/path'))
        .toBe('/d/another/path');
    });

    it('should preserve regular Unix paths on all platforms', () => {
      // Test on Linux
      Object.defineProperty(process, 'platform', {
        value: 'linux',
        writable: true,
        configurable: true
      });

      expect(normalizePath('/home/user/documents'))
        .toBe('/home/user/documents');

      expect(normalizePath('/var/log/app'))
        .toBe('/var/log/app');

      // Test on Windows (though these paths wouldn't work on Windows)
      Object.defineProperty(process, 'platform', {
        value: 'win32',
        writable: true,
        configurable: true
      });

      expect(normalizePath('/home/user/documents'))
        .toBe('/home/user/documents');

      expect(normalizePath('/var/log/app'))
        .toBe('/var/log/app');
    });

    it('reproduces exact scenario from issue #2795', () => {
      // Simulate running inside WSL: wsl npx @modelcontextprotocol/server-filesystem /mnt/c/Users/username/folder
      Object.defineProperty(process, 'platform', {
        value: 'linux',
        writable: true,
        configurable: true
      });

      // This is the exact path from the issue
      const inputPath = '/mnt/c/Users/username/folder';
      const result = normalizePath(inputPath);

      // Should NOT convert to C:\Users\username\folder
      expect(result).toBe('/mnt/c/Users/username/folder');
      expect(result).not.toContain('C:');
      expect(result).not.toContain('\\');
    });

    it('should handle relative path slash conversion based on platform', () => {
      // This test verifies platform-specific behavior naturally without mocking
      // On Windows: forward slashes converted to backslashes
      // On Linux/Unix: forward slashes preserved
      const relativePath = 'some/relative/path';
      const result = normalizePath(relativePath);

      if (originalPlatform === 'win32') {
        expect(result).toBe('some\\relative\\path');
      } else {
        expect(result).toBe('some/relative/path');
      }
    });
  });
});
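These path-utils tests encode one central rule: `/mnt/<drive>/` WSL mounts are never rewritten, while `/<drive>/` prefixes are treated as Windows drives only when `process.platform` is `'win32'`. Below is a sketch of that guard, assuming regex-based matching; the function name `convertToWindowsPathSketch` is illustrative and the real implementation lives in `path-utils.ts`.

```ts
// Sketch of the platform guard the tests above describe (assumed, not the actual module code).
function convertToWindowsPathSketch(p: string): string {
  // WSL mounts like /mnt/c/... are valid Linux paths; never rewrite them.
  if (/^\/mnt\/[a-z]\//i.test(p)) return p;

  // Unix-style drive paths like /c/... are only drive letters on Windows itself.
  const drive = /^\/([a-zA-Z])\/(.*)$/.exec(p);
  if (drive && process.platform === 'win32') {
    return `${drive[1].toUpperCase()}:\\${drive[2].replace(/\//g, '\\')}`;
  }

  // Real Windows drive paths: normalize separators, leave content alone.
  if (/^[a-zA-Z]:[\\/]/.test(p)) return p.replace(/\//g, '\\');

  return p; // plain Unix paths pass through unchanged
}
```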
953
mcpServer/modules/filesystem/__tests__/path-validation.test.ts
Normal file
@@ -0,0 +1,953 @@
import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import * as path from 'path';
import * as fs from 'fs/promises';
import * as os from 'os';
import { isPathWithinAllowedDirectories } from '../path-validation.js';

/**
 * Check if the current environment supports symlink creation
 */
async function checkSymlinkSupport(): Promise<boolean> {
  const testDir = await fs.mkdtemp(path.join(os.tmpdir(), 'symlink-test-'));
  try {
    const targetFile = path.join(testDir, 'target.txt');
    const linkFile = path.join(testDir, 'link.txt');

    await fs.writeFile(targetFile, 'test');
    await fs.symlink(targetFile, linkFile);

    // If we get here, symlinks are supported
    return true;
  } catch (error) {
    // EPERM indicates no symlink permissions
    if ((error as NodeJS.ErrnoException).code === 'EPERM') {
      return false;
    }
    // Other errors might indicate a real problem
    throw error;
  } finally {
    await fs.rm(testDir, { recursive: true, force: true });
  }
}

// Global variable to store symlink support status
let symlinkSupported: boolean | null = null;

/**
 * Get cached symlink support status, checking once per test run
 */
async function getSymlinkSupport(): Promise<boolean> {
  if (symlinkSupported === null) {
    symlinkSupported = await checkSymlinkSupport();
    if (!symlinkSupported) {
      console.log('\n⚠️ Symlink tests will be skipped - symlink creation not supported in this environment');
      console.log(' On Windows, enable Developer Mode or run as Administrator to enable symlink tests');
    }
  }
  return symlinkSupported;
}

describe('Path Validation', () => {
  it('allows exact directory match', () => {
    const allowed = ['/home/user/project'];
    expect(isPathWithinAllowedDirectories('/home/user/project', allowed)).toBe(true);
  });

  it('allows subdirectories', () => {
    const allowed = ['/home/user/project'];
    expect(isPathWithinAllowedDirectories('/home/user/project/src', allowed)).toBe(true);
    expect(isPathWithinAllowedDirectories('/home/user/project/src/index.js', allowed)).toBe(true);
    expect(isPathWithinAllowedDirectories('/home/user/project/deeply/nested/file.txt', allowed)).toBe(true);
  });

  it('blocks similar directory names (prefix vulnerability)', () => {
    const allowed = ['/home/user/project'];
    expect(isPathWithinAllowedDirectories('/home/user/project2', allowed)).toBe(false);
    expect(isPathWithinAllowedDirectories('/home/user/project_backup', allowed)).toBe(false);
    expect(isPathWithinAllowedDirectories('/home/user/project-old', allowed)).toBe(false);
    expect(isPathWithinAllowedDirectories('/home/user/projectile', allowed)).toBe(false);
    expect(isPathWithinAllowedDirectories('/home/user/project.bak', allowed)).toBe(false);
  });

  it('blocks paths outside allowed directories', () => {
    const allowed = ['/home/user/project'];
    expect(isPathWithinAllowedDirectories('/home/user/other', allowed)).toBe(false);
    expect(isPathWithinAllowedDirectories('/etc/passwd', allowed)).toBe(false);
    expect(isPathWithinAllowedDirectories('/home/user', allowed)).toBe(false);
    expect(isPathWithinAllowedDirectories('/', allowed)).toBe(false);
  });

  it('handles multiple allowed directories', () => {
    const allowed = ['/home/user/project1', '/home/user/project2'];
    expect(isPathWithinAllowedDirectories('/home/user/project1/src', allowed)).toBe(true);
    expect(isPathWithinAllowedDirectories('/home/user/project2/src', allowed)).toBe(true);
    expect(isPathWithinAllowedDirectories('/home/user/project3', allowed)).toBe(false);
    expect(isPathWithinAllowedDirectories('/home/user/project1_backup', allowed)).toBe(false);
    expect(isPathWithinAllowedDirectories('/home/user/project2-old', allowed)).toBe(false);
  });

  it('blocks parent and sibling directories', () => {
    const allowed = ['/test/allowed'];

    // Parent directory
    expect(isPathWithinAllowedDirectories('/test', allowed)).toBe(false);
    expect(isPathWithinAllowedDirectories('/', allowed)).toBe(false);

    // Sibling with common prefix
    expect(isPathWithinAllowedDirectories('/test/allowed_sibling', allowed)).toBe(false);
    expect(isPathWithinAllowedDirectories('/test/allowed2', allowed)).toBe(false);
  });

  it('handles paths with special characters', () => {
    const allowed = ['/home/user/my-project (v2)'];

    expect(isPathWithinAllowedDirectories('/home/user/my-project (v2)', allowed)).toBe(true);
    expect(isPathWithinAllowedDirectories('/home/user/my-project (v2)/src', allowed)).toBe(true);
    expect(isPathWithinAllowedDirectories('/home/user/my-project (v2)_backup', allowed)).toBe(false);
    expect(isPathWithinAllowedDirectories('/home/user/my-project', allowed)).toBe(false);
  });

  describe('Input validation', () => {
    it('rejects empty inputs', () => {
      const allowed = ['/home/user/project'];

      expect(isPathWithinAllowedDirectories('', allowed)).toBe(false);
      expect(isPathWithinAllowedDirectories('/home/user/project', [])).toBe(false);
    });

    it('handles trailing separators correctly', () => {
      const allowed = ['/home/user/project'];

      // Path with trailing separator should still match
      expect(isPathWithinAllowedDirectories('/home/user/project/', allowed)).toBe(true);

      // Allowed directory with trailing separator
      const allowedWithSep = ['/home/user/project/'];
      expect(isPathWithinAllowedDirectories('/home/user/project', allowedWithSep)).toBe(true);
      expect(isPathWithinAllowedDirectories('/home/user/project/', allowedWithSep)).toBe(true);

      // Should still block similar names with or without trailing separators
      expect(isPathWithinAllowedDirectories('/home/user/project2', allowedWithSep)).toBe(false);
      expect(isPathWithinAllowedDirectories('/home/user/project2', allowed)).toBe(false);
      expect(isPathWithinAllowedDirectories('/home/user/project2/', allowed)).toBe(false);
    });

    it('skips empty directory entries in allowed list', () => {
      const allowed = ['', '/home/user/project', ''];
      expect(isPathWithinAllowedDirectories('/home/user/project', allowed)).toBe(true);
      expect(isPathWithinAllowedDirectories('/home/user/project/src', allowed)).toBe(true);

      // Should still validate properly with empty entries
      expect(isPathWithinAllowedDirectories('/home/user/other', allowed)).toBe(false);
    });

    it('handles Windows paths with trailing separators', () => {
      if (path.sep === '\\') {
        const allowed = ['C:\\Users\\project'];

        // Path with trailing separator
        expect(isPathWithinAllowedDirectories('C:\\Users\\project\\', allowed)).toBe(true);

        // Allowed with trailing separator
        const allowedWithSep = ['C:\\Users\\project\\'];
        expect(isPathWithinAllowedDirectories('C:\\Users\\project', allowedWithSep)).toBe(true);
        expect(isPathWithinAllowedDirectories('C:\\Users\\project\\', allowedWithSep)).toBe(true);

        // Should still block similar names
        expect(isPathWithinAllowedDirectories('C:\\Users\\project2\\', allowed)).toBe(false);
      }
    });
  });

  describe('Error handling', () => {
    it('normalizes relative paths to absolute', () => {
      const allowed = [process.cwd()];

      // Relative paths get normalized to absolute paths based on cwd
      expect(isPathWithinAllowedDirectories('relative/path', allowed)).toBe(true);
      expect(isPathWithinAllowedDirectories('./file', allowed)).toBe(true);

      // Parent directory references that escape allowed directory
      const parentAllowed = ['/home/user/project'];
      expect(isPathWithinAllowedDirectories('../parent', parentAllowed)).toBe(false);
    });

    it('returns false for relative paths in allowed directories', () => {
      const badAllowed = ['relative/path', '/some/other/absolute/path'];

      // Relative paths in allowed dirs are normalized to absolute based on cwd
      // The normalized 'relative/path' won't match our test path
      expect(isPathWithinAllowedDirectories('/some/other/absolute/path/file', badAllowed)).toBe(true);
      expect(isPathWithinAllowedDirectories('/absolute/path/file', badAllowed)).toBe(false);
    });

    it('handles null and undefined inputs gracefully', () => {
      const allowed = ['/home/user/project'];

      // Should return false, not crash
      expect(isPathWithinAllowedDirectories(null as any, allowed)).toBe(false);
      expect(isPathWithinAllowedDirectories(undefined as any, allowed)).toBe(false);
      expect(isPathWithinAllowedDirectories('/path', null as any)).toBe(false);
      expect(isPathWithinAllowedDirectories('/path', undefined as any)).toBe(false);
    });
  });

  describe('Unicode and special characters', () => {
    it('handles unicode characters in paths', () => {
      const allowed = ['/home/user/café'];

      expect(isPathWithinAllowedDirectories('/home/user/café', allowed)).toBe(true);
      expect(isPathWithinAllowedDirectories('/home/user/café/file', allowed)).toBe(true);

      // Different unicode representation won't match (not normalized)
      const decomposed = '/home/user/cafe\u0301'; // e + combining accent
      expect(isPathWithinAllowedDirectories(decomposed, allowed)).toBe(false);
    });

    it('handles paths with spaces correctly', () => {
      const allowed = ['/home/user/my project'];

      expect(isPathWithinAllowedDirectories('/home/user/my project', allowed)).toBe(true);
      expect(isPathWithinAllowedDirectories('/home/user/my project/file', allowed)).toBe(true);

      // Partial matches should fail
      expect(isPathWithinAllowedDirectories('/home/user/my', allowed)).toBe(false);
      expect(isPathWithinAllowedDirectories('/home/user/my proj', allowed)).toBe(false);
    });
  });

  describe('Overlapping allowed directories', () => {
    it('handles nested allowed directories correctly', () => {
      const allowed = ['/home', '/home/user', '/home/user/project'];

      // All paths under /home are allowed
      expect(isPathWithinAllowedDirectories('/home/anything', allowed)).toBe(true);
      expect(isPathWithinAllowedDirectories('/home/user/anything', allowed)).toBe(true);
      expect(isPathWithinAllowedDirectories('/home/user/project/anything', allowed)).toBe(true);

      // First match wins (most permissive)
      expect(isPathWithinAllowedDirectories('/home/other/deep/path', allowed)).toBe(true);
    });

    it('handles root directory as allowed', () => {
      const allowed = ['/'];

      // Everything is allowed under root (dangerous configuration)
      expect(isPathWithinAllowedDirectories('/', allowed)).toBe(true);
      expect(isPathWithinAllowedDirectories('/any/path', allowed)).toBe(true);
      expect(isPathWithinAllowedDirectories('/etc/passwd', allowed)).toBe(true);
      expect(isPathWithinAllowedDirectories('/home/user/secret', allowed)).toBe(true);

      // But only on the same filesystem root
      if (path.sep === '\\') {
        expect(isPathWithinAllowedDirectories('D:\\other', ['/'])).toBe(false);
      }
    });
  });

  describe('Cross-platform behavior', () => {
    it('handles Windows-style paths on Windows', () => {
      if (path.sep === '\\') {
        const allowed = ['C:\\Users\\project'];
        expect(isPathWithinAllowedDirectories('C:\\Users\\project', allowed)).toBe(true);
        expect(isPathWithinAllowedDirectories('C:\\Users\\project\\src', allowed)).toBe(true);
        expect(isPathWithinAllowedDirectories('C:\\Users\\project2', allowed)).toBe(false);
        expect(isPathWithinAllowedDirectories('C:\\Users\\project_backup', allowed)).toBe(false);
      }
    });

    it('handles Unix-style paths on Unix', () => {
      if (path.sep === '/') {
        const allowed = ['/home/user/project'];
        expect(isPathWithinAllowedDirectories('/home/user/project', allowed)).toBe(true);
        expect(isPathWithinAllowedDirectories('/home/user/project/src', allowed)).toBe(true);
        expect(isPathWithinAllowedDirectories('/home/user/project2', allowed)).toBe(false);
      }
    });
  });

  describe('Validation Tests - Path Traversal', () => {
    it('blocks path traversal attempts', () => {
      const allowed = ['/home/user/project'];

      // Basic traversal attempts
      expect(isPathWithinAllowedDirectories('/home/user/project/../../../etc/passwd', allowed)).toBe(false);
      expect(isPathWithinAllowedDirectories('/home/user/project/../../other', allowed)).toBe(false);
      expect(isPathWithinAllowedDirectories('/home/user/project/../project2', allowed)).toBe(false);

      // Mixed traversal with valid segments
      expect(isPathWithinAllowedDirectories('/home/user/project/src/../../project2', allowed)).toBe(false);
      expect(isPathWithinAllowedDirectories('/home/user/project/./../../other', allowed)).toBe(false);

      // Multiple traversal sequences
      expect(isPathWithinAllowedDirectories('/home/user/project/../project/../../../etc', allowed)).toBe(false);
    });

    it('blocks traversal in allowed directories', () => {
      const allowed = ['/home/user/project/../safe'];

      // The allowed directory itself should be normalized and safe
      expect(isPathWithinAllowedDirectories('/home/user/safe/file', allowed)).toBe(true);
      expect(isPathWithinAllowedDirectories('/home/user/project/file', allowed)).toBe(false);
    });

    it('handles complex traversal patterns', () => {
      const allowed = ['/home/user/project'];

      // Double dots in filenames (not traversal) - these normalize to paths within allowed dir
      expect(isPathWithinAllowedDirectories('/home/user/project/..test', allowed)).toBe(true); // Not traversal
      expect(isPathWithinAllowedDirectories('/home/user/project/test..', allowed)).toBe(true); // Not traversal
      expect(isPathWithinAllowedDirectories('/home/user/project/te..st', allowed)).toBe(true); // Not traversal

      // Actual traversal
      expect(isPathWithinAllowedDirectories('/home/user/project/../test', allowed)).toBe(false); // Is traversal - goes to /home/user/test

      // Edge case: /home/user/project/.. normalizes to /home/user (parent dir)
      expect(isPathWithinAllowedDirectories('/home/user/project/..', allowed)).toBe(false); // Goes to parent
    });
  });

  describe('Validation Tests - Null Bytes', () => {
    it('rejects paths with null bytes', () => {
      const allowed = ['/home/user/project'];

      expect(isPathWithinAllowedDirectories('/home/user/project\x00/etc/passwd', allowed)).toBe(false);
      expect(isPathWithinAllowedDirectories('/home/user/project/test\x00.txt', allowed)).toBe(false);
      expect(isPathWithinAllowedDirectories('\x00/home/user/project', allowed)).toBe(false);
      expect(isPathWithinAllowedDirectories('/home/user/project/\x00', allowed)).toBe(false);
    });

    it('rejects allowed directories with null bytes', () => {
      const allowed = ['/home/user/project\x00'];

      expect(isPathWithinAllowedDirectories('/home/user/project', allowed)).toBe(false);
      expect(isPathWithinAllowedDirectories('/home/user/project/file', allowed)).toBe(false);
    });
  });

  describe('Validation Tests - Special Characters', () => {
    it('allows percent signs in filenames', () => {
      const allowed = ['/home/user/project'];

      // Percent is a valid filename character
      expect(isPathWithinAllowedDirectories('/home/user/project/report_50%.pdf', allowed)).toBe(true);
      expect(isPathWithinAllowedDirectories('/home/user/project/Q1_25%_growth', allowed)).toBe(true);
|
||||||
|
expect(isPathWithinAllowedDirectories('/home/user/project/%41', allowed)).toBe(true); // File named %41
|
||||||
|
|
||||||
|
// URL encoding is NOT decoded by path.normalize, so these are just odd filenames
|
||||||
|
expect(isPathWithinAllowedDirectories('/home/user/project/%2e%2e', allowed)).toBe(true); // File named "%2e%2e"
|
||||||
|
expect(isPathWithinAllowedDirectories('/home/user/project/file%20name', allowed)).toBe(true); // File with %20 in name
|
||||||
|
});
|
||||||
|
|
||||||
|
it('handles percent signs in allowed directories', () => {
|
||||||
|
const allowed = ['/home/user/project%20files'];
|
||||||
|
|
||||||
|
// This is a directory literally named "project%20files"
|
||||||
|
expect(isPathWithinAllowedDirectories('/home/user/project%20files/test', allowed)).toBe(true);
|
||||||
|
expect(isPathWithinAllowedDirectories('/home/user/project files/test', allowed)).toBe(false); // Different dir
|
||||||
|
});
|
||||||
|
});
|
||||||
|
|
||||||
|
describe('Path Normalization', () => {
|
||||||
|
it('normalizes paths before comparison', () => {
|
||||||
|
const allowed = ['/home/user/project'];
|
||||||
|
|
||||||
|
// Trailing slashes
|
||||||
|
expect(isPathWithinAllowedDirectories('/home/user/project/', allowed)).toBe(true);
|
||||||
|
expect(isPathWithinAllowedDirectories('/home/user/project//', allowed)).toBe(true);
|
||||||
|
expect(isPathWithinAllowedDirectories('/home/user/project///', allowed)).toBe(true);
|
||||||
|
|
||||||
|
// Current directory references
|
||||||
|
expect(isPathWithinAllowedDirectories('/home/user/project/./src', allowed)).toBe(true);
|
||||||
|
expect(isPathWithinAllowedDirectories('/home/user/./project/src', allowed)).toBe(true);
|
||||||
|
|
||||||
|
// Multiple slashes
|
||||||
|
expect(isPathWithinAllowedDirectories('/home/user/project//src//file', allowed)).toBe(true);
|
||||||
|
expect(isPathWithinAllowedDirectories('/home//user//project//src', allowed)).toBe(true);
|
||||||
|
|
||||||
|
// Should still block outside paths
|
||||||
|
expect(isPathWithinAllowedDirectories('/home/user//project2', allowed)).toBe(false);
|
||||||
|
});
|
||||||
|
|
||||||
|
it('handles mixed separators correctly', () => {
|
||||||
|
if (path.sep === '\\') {
|
||||||
|
const allowed = ['C:\\Users\\project'];
|
||||||
|
|
||||||
|
// Mixed separators should be normalized
|
||||||
|
expect(isPathWithinAllowedDirectories('C:/Users/project', allowed)).toBe(true);
|
||||||
|
expect(isPathWithinAllowedDirectories('C:\\Users/project\\src', allowed)).toBe(true);
|
||||||
|
expect(isPathWithinAllowedDirectories('C:/Users\\project/src', allowed)).toBe(true);
|
||||||
|
}
|
||||||
|
});
|
||||||
|
});
|
||||||
|
|
||||||
|
describe('Edge Cases', () => {
|
||||||
|
it('rejects non-string inputs safely', () => {
|
||||||
|
const allowed = ['/home/user/project'];
|
||||||
|
|
||||||
|
expect(isPathWithinAllowedDirectories(123 as any, allowed)).toBe(false);
|
||||||
|
expect(isPathWithinAllowedDirectories({} as any, allowed)).toBe(false);
|
||||||
|
expect(isPathWithinAllowedDirectories([] as any, allowed)).toBe(false);
|
||||||
|
expect(isPathWithinAllowedDirectories(null as any, allowed)).toBe(false);
|
||||||
|
expect(isPathWithinAllowedDirectories(undefined as any, allowed)).toBe(false);
|
||||||
|
|
||||||
|
// Non-string in allowed directories
|
||||||
|
expect(isPathWithinAllowedDirectories('/home/user/project', [123 as any])).toBe(false);
|
||||||
|
expect(isPathWithinAllowedDirectories('/home/user/project', [{} as any])).toBe(false);
|
||||||
|
});
|
||||||
|
|
||||||
|
it('handles very long paths', () => {
|
||||||
|
const allowed = ['/home/user/project'];
|
||||||
|
|
||||||
|
// Create a very long path that's still valid
|
||||||
|
const longSubPath = 'a/'.repeat(1000) + 'file.txt';
|
||||||
|
expect(isPathWithinAllowedDirectories(`/home/user/project/${longSubPath}`, allowed)).toBe(true);
|
||||||
|
|
||||||
|
// Very long path that escapes
|
||||||
|
const escapePath = 'a/'.repeat(1000) + '../'.repeat(1001) + 'etc/passwd';
|
||||||
|
expect(isPathWithinAllowedDirectories(`/home/user/project/${escapePath}`, allowed)).toBe(false);
|
||||||
|
});
|
||||||
|
});
|
||||||
|
|
||||||
|
describe('Additional Coverage', () => {
|
||||||
|
it('handles allowed directories with traversal that normalizes safely', () => {
|
||||||
|
// These allowed dirs contain traversal but normalize to valid paths
|
||||||
|
const allowed = ['/home/user/../user/project'];
|
||||||
|
|
||||||
|
// Should normalize to /home/user/project and work correctly
|
||||||
|
expect(isPathWithinAllowedDirectories('/home/user/project/file', allowed)).toBe(true);
|
||||||
|
expect(isPathWithinAllowedDirectories('/home/user/other', allowed)).toBe(false);
|
||||||
|
});
|
||||||
|
|
||||||
|
it('handles symbolic dots in filenames', () => {
|
||||||
|
const allowed = ['/home/user/project'];
|
||||||
|
|
||||||
|
// Single and double dots as actual filenames (not traversal)
|
||||||
|
expect(isPathWithinAllowedDirectories('/home/user/project/.', allowed)).toBe(true);
|
||||||
|
expect(isPathWithinAllowedDirectories('/home/user/project/..', allowed)).toBe(false); // This normalizes to parent
|
||||||
|
expect(isPathWithinAllowedDirectories('/home/user/project/...', allowed)).toBe(true); // Three dots is a valid filename
|
||||||
|
expect(isPathWithinAllowedDirectories('/home/user/project/....', allowed)).toBe(true); // Four dots is a valid filename
|
||||||
|
});
|
||||||
|
|
||||||
|
it('handles UNC paths on Windows', () => {
|
||||||
|
if (path.sep === '\\') {
|
||||||
|
const allowed = ['\\\\server\\share\\project'];
|
||||||
|
|
||||||
|
expect(isPathWithinAllowedDirectories('\\\\server\\share\\project', allowed)).toBe(true);
|
||||||
|
expect(isPathWithinAllowedDirectories('\\\\server\\share\\project\\file', allowed)).toBe(true);
|
||||||
|
expect(isPathWithinAllowedDirectories('\\\\server\\share\\other', allowed)).toBe(false);
|
||||||
|
expect(isPathWithinAllowedDirectories('\\\\other\\share\\project', allowed)).toBe(false);
|
||||||
|
}
|
||||||
|
});
|
||||||
|
});
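  // Reviewer note: a minimal sketch of the contract the tests above pin down; the
  // real implementation is imported from the module under test. Assumed logic:
  // reject non-strings and null bytes, normalize both sides, then require an
  // exact match or a prefix match that ends on a path separator (so
  // '/home/user/project2' never matches '/home/user/project'). Names below are
  // illustrative only.
  //
  //   function isPathWithinAllowedDirectoriesSketch(p: unknown, allowed: unknown[]): boolean {
  //     if (typeof p !== 'string' || p.includes('\0')) return false;
  //     const target = path.normalize(p).replace(/[\\/]+$/, '') || path.sep;
  //     return allowed.some((dir) => {
  //       if (typeof dir !== 'string' || dir.includes('\0')) return false;
  //       const base = path.normalize(dir).replace(/[\\/]+$/, '') || path.sep;
  //       return target === base || target.startsWith(base === path.sep ? base : base + path.sep);
  //     });
  //   }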

  describe('Symlink Tests', () => {
    let testDir: string;
    let allowedDir: string;
    let forbiddenDir: string;

    beforeEach(async () => {
      testDir = await fs.mkdtemp(path.join(os.tmpdir(), 'fs-error-test-'));
      allowedDir = path.join(testDir, 'allowed');
      forbiddenDir = path.join(testDir, 'forbidden');

      await fs.mkdir(allowedDir, { recursive: true });
      await fs.mkdir(forbiddenDir, { recursive: true });
    });

    afterEach(async () => {
      await fs.rm(testDir, { recursive: true, force: true });
    });

    it('validates symlink handling', async () => {
      // Test with symlinks
      try {
        const linkPath = path.join(allowedDir, 'bad-link');
        const targetPath = path.join(forbiddenDir, 'target.txt');

        await fs.writeFile(targetPath, 'content');
        await fs.symlink(targetPath, linkPath);

        // In real implementation, this would throw with the resolved path
        const realPath = await fs.realpath(linkPath);
        const allowed = [allowedDir];

        // Symlink target should be outside allowed directory
        expect(isPathWithinAllowedDirectories(realPath, allowed)).toBe(false);
      } catch (error) {
        // Skip if no symlink permissions
      }
    });

    it('handles non-existent paths correctly', async () => {
      const newFilePath = path.join(allowedDir, 'subdir', 'newfile.txt');

      // Parent directory doesn't exist
      try {
        await fs.access(newFilePath);
      } catch (error) {
        expect((error as NodeJS.ErrnoException).code).toBe('ENOENT');
      }

      // After creating parent, validation should work
      await fs.mkdir(path.dirname(newFilePath), { recursive: true });
      const allowed = [allowedDir];
      expect(isPathWithinAllowedDirectories(newFilePath, allowed)).toBe(true);
    });

    // Test path resolution consistency for symlinked files
    it('validates symlinked files consistently between path and resolved forms', async () => {
      try {
        // Setup: Create target file in forbidden area
        const targetFile = path.join(forbiddenDir, 'target.txt');
        await fs.writeFile(targetFile, 'TARGET_CONTENT');

        // Create symlink inside allowed directory pointing to forbidden file
        const symlinkPath = path.join(allowedDir, 'link-to-target.txt');
        await fs.symlink(targetFile, symlinkPath);

        // The symlink path itself passes validation (looks like it's in allowed dir)
        expect(isPathWithinAllowedDirectories(symlinkPath, [allowedDir])).toBe(true);

        // But the resolved path should fail validation
        const resolvedPath = await fs.realpath(symlinkPath);
        expect(isPathWithinAllowedDirectories(resolvedPath, [allowedDir])).toBe(false);

        // Verify the resolved path goes to the forbidden location (normalize both paths for macOS temp dirs)
        expect(await fs.realpath(resolvedPath)).toBe(await fs.realpath(targetFile));
      } catch (error) {
        // Skip if no symlink permissions on the system
        if ((error as NodeJS.ErrnoException).code !== 'EPERM') {
          throw error;
        }
      }
    });

    // Test allowed directory resolution behavior
    it('validates paths correctly when allowed directory is resolved from symlink', async () => {
      try {
        // Setup: Create the actual target directory with content
        const actualTargetDir = path.join(testDir, 'actual-target');
        await fs.mkdir(actualTargetDir, { recursive: true });
        const targetFile = path.join(actualTargetDir, 'file.txt');
        await fs.writeFile(targetFile, 'FILE_CONTENT');

        // Setup: Create symlink directory that points to target
        const symlinkDir = path.join(testDir, 'symlink-dir');
        await fs.symlink(actualTargetDir, symlinkDir);

        // Simulate resolved allowed directory (what the server startup should do)
        const resolvedAllowedDir = await fs.realpath(symlinkDir);
        const resolvedTargetDir = await fs.realpath(actualTargetDir);
        expect(resolvedAllowedDir).toBe(resolvedTargetDir);

        // Test 1: File access through original symlink path should pass validation with resolved allowed dir
        const fileViaSymlink = path.join(symlinkDir, 'file.txt');
        const resolvedFile = await fs.realpath(fileViaSymlink);
        expect(isPathWithinAllowedDirectories(resolvedFile, [resolvedAllowedDir])).toBe(true);

        // Test 2: File access through resolved path should also pass validation
        const fileViaResolved = path.join(resolvedTargetDir, 'file.txt');
        expect(isPathWithinAllowedDirectories(fileViaResolved, [resolvedAllowedDir])).toBe(true);

        // Test 3: Demonstrate inconsistent behavior with unresolved allowed directories
        // If allowed dirs were not resolved (storing symlink paths instead):
        const unresolvedAllowedDirs = [symlinkDir];
        // This validation would incorrectly fail for the same content:
        expect(isPathWithinAllowedDirectories(resolvedFile, unresolvedAllowedDirs)).toBe(false);

      } catch (error) {
        // Skip if no symlink permissions on the system
        if ((error as NodeJS.ErrnoException).code !== 'EPERM') {
          throw error;
        }
      }
    });

    it('resolves nested symlink chains completely', async () => {
      try {
        // Setup: Create target file in forbidden area
        const actualTarget = path.join(forbiddenDir, 'target-file.txt');
        await fs.writeFile(actualTarget, 'FINAL_CONTENT');

        // Create chain of symlinks: allowedFile -> link2 -> link1 -> actualTarget
        const link1 = path.join(testDir, 'intermediate-link1');
        const link2 = path.join(testDir, 'intermediate-link2');
        const allowedFile = path.join(allowedDir, 'seemingly-safe-file');

        await fs.symlink(actualTarget, link1);
        await fs.symlink(link1, link2);
        await fs.symlink(link2, allowedFile);

        // The allowed file path passes basic validation
        expect(isPathWithinAllowedDirectories(allowedFile, [allowedDir])).toBe(true);

        // But complete resolution reveals the forbidden target
        const fullyResolvedPath = await fs.realpath(allowedFile);
        expect(isPathWithinAllowedDirectories(fullyResolvedPath, [allowedDir])).toBe(false);
        expect(await fs.realpath(fullyResolvedPath)).toBe(await fs.realpath(actualTarget));

      } catch (error) {
        // Skip if no symlink permissions on the system
        if ((error as NodeJS.ErrnoException).code !== 'EPERM') {
          throw error;
        }
      }
    });
  });

  describe('Path Validation Race Condition Tests', () => {
    let testDir: string;
    let allowedDir: string;
    let forbiddenDir: string;
    let targetFile: string;
    let testPath: string;

    beforeEach(async () => {
      testDir = await fs.mkdtemp(path.join(os.tmpdir(), 'race-test-'));
      allowedDir = path.join(testDir, 'allowed');
      forbiddenDir = path.join(testDir, 'outside');
      targetFile = path.join(forbiddenDir, 'target.txt');
      testPath = path.join(allowedDir, 'test.txt');

      await fs.mkdir(allowedDir, { recursive: true });
      await fs.mkdir(forbiddenDir, { recursive: true });
      await fs.writeFile(targetFile, 'ORIGINAL CONTENT', 'utf-8');
    });

    afterEach(async () => {
      await fs.rm(testDir, { recursive: true, force: true });
    });

    it('validates non-existent file paths based on parent directory', async () => {
      const allowed = [allowedDir];

      expect(isPathWithinAllowedDirectories(testPath, allowed)).toBe(true);
      await expect(fs.access(testPath)).rejects.toThrow();

      const parentDir = path.dirname(testPath);
      expect(isPathWithinAllowedDirectories(parentDir, allowed)).toBe(true);
    });

    it('demonstrates symlink race condition allows writing outside allowed directories', async () => {
      const symlinkSupported = await getSymlinkSupport();
      if (!symlinkSupported) {
        console.log(' ⏭️ Skipping symlink race condition test - symlinks not supported');
        return;
      }

      const allowed = [allowedDir];

      await expect(fs.access(testPath)).rejects.toThrow();
      expect(isPathWithinAllowedDirectories(testPath, allowed)).toBe(true);

      await fs.symlink(targetFile, testPath);
      await fs.writeFile(testPath, 'MODIFIED CONTENT', 'utf-8');

      const targetContent = await fs.readFile(targetFile, 'utf-8');
      expect(targetContent).toBe('MODIFIED CONTENT');

      const resolvedPath = await fs.realpath(testPath);
      expect(isPathWithinAllowedDirectories(resolvedPath, allowed)).toBe(false);
    });

    it('shows timing differences between validation approaches', async () => {
      const symlinkSupported = await getSymlinkSupport();
      if (!symlinkSupported) {
        console.log(' ⏭️ Skipping timing validation test - symlinks not supported');
        return;
      }

      const allowed = [allowedDir];

      const validation1 = isPathWithinAllowedDirectories(testPath, allowed);
      expect(validation1).toBe(true);

      await fs.symlink(targetFile, testPath);

      const resolvedPath = await fs.realpath(testPath);
      const validation2 = isPathWithinAllowedDirectories(resolvedPath, allowed);
      expect(validation2).toBe(false);

      expect(validation1).not.toBe(validation2);
    });

    it('validates directory creation timing', async () => {
      const symlinkSupported = await getSymlinkSupport();
      if (!symlinkSupported) {
        console.log(' ⏭️ Skipping directory creation timing test - symlinks not supported');
        return;
      }

      const allowed = [allowedDir];
      const testDir = path.join(allowedDir, 'newdir');

      expect(isPathWithinAllowedDirectories(testDir, allowed)).toBe(true);

      await fs.symlink(forbiddenDir, testDir);

      expect(isPathWithinAllowedDirectories(testDir, allowed)).toBe(true);

      const resolved = await fs.realpath(testDir);
      expect(isPathWithinAllowedDirectories(resolved, allowed)).toBe(false);
    });

    it('demonstrates exclusive file creation behavior', async () => {
      const symlinkSupported = await getSymlinkSupport();
      if (!symlinkSupported) {
        console.log(' ⏭️ Skipping exclusive file creation test - symlinks not supported');
        return;
      }

      const allowed = [allowedDir];

      await fs.symlink(targetFile, testPath);

      await expect(fs.open(testPath, 'wx')).rejects.toThrow(/EEXIST/);

      await fs.writeFile(testPath, 'NEW CONTENT', 'utf-8');
      const targetContent = await fs.readFile(targetFile, 'utf-8');
      expect(targetContent).toBe('NEW CONTENT');
    });

    it('should use resolved parent paths for non-existent files', async () => {
      const symlinkSupported = await getSymlinkSupport();
      if (!symlinkSupported) {
        console.log(' ⏭️ Skipping resolved parent paths test - symlinks not supported');
        return;
      }

      const allowed = [allowedDir];

      const symlinkDir = path.join(allowedDir, 'link');
      await fs.symlink(forbiddenDir, symlinkDir);

      const fileThroughSymlink = path.join(symlinkDir, 'newfile.txt');

      expect(fileThroughSymlink.startsWith(allowedDir)).toBe(true);

      const parentDir = path.dirname(fileThroughSymlink);
      const resolvedParent = await fs.realpath(parentDir);
      expect(isPathWithinAllowedDirectories(resolvedParent, allowed)).toBe(false);

      const expectedSafePath = path.join(resolvedParent, path.basename(fileThroughSymlink));
      expect(isPathWithinAllowedDirectories(expectedSafePath, allowed)).toBe(false);
    });

    it('demonstrates parent directory symlink traversal', async () => {
      const symlinkSupported = await getSymlinkSupport();
      if (!symlinkSupported) {
        console.log(' ⏭️ Skipping parent directory symlink traversal test - symlinks not supported');
        return;
      }

      const allowed = [allowedDir];
      const deepPath = path.join(allowedDir, 'sub1', 'sub2', 'file.txt');

      expect(isPathWithinAllowedDirectories(deepPath, allowed)).toBe(true);

      const sub1Path = path.join(allowedDir, 'sub1');
      await fs.symlink(forbiddenDir, sub1Path);

      await fs.mkdir(path.join(sub1Path, 'sub2'), { recursive: true });
      await fs.writeFile(deepPath, 'CONTENT', 'utf-8');

      const realPath = await fs.realpath(deepPath);
      const realAllowedDir = await fs.realpath(allowedDir);
      const realForbiddenDir = await fs.realpath(forbiddenDir);

      expect(realPath.startsWith(realAllowedDir)).toBe(false);
      expect(realPath.startsWith(realForbiddenDir)).toBe(true);
    });

    it('should prevent race condition between validatePath and file operation', async () => {
      const symlinkSupported = await getSymlinkSupport();
      if (!symlinkSupported) {
        console.log(' ⏭️ Skipping race condition prevention test - symlinks not supported');
        return;
      }

      const allowed = [allowedDir];
      const racePath = path.join(allowedDir, 'race-file.txt');
      const targetFile = path.join(forbiddenDir, 'target.txt');

      await fs.writeFile(targetFile, 'ORIGINAL CONTENT', 'utf-8');

      // Path validation would pass (file doesn't exist, parent is in allowed dir)
      expect(await fs.access(racePath).then(() => false).catch(() => true)).toBe(true);
      expect(isPathWithinAllowedDirectories(racePath, allowed)).toBe(true);

      // Race condition: symlink created after validation but before write
      await fs.symlink(targetFile, racePath);

      // With exclusive write flag, write should fail on symlink
      await expect(
        fs.writeFile(racePath, 'NEW CONTENT', { encoding: 'utf-8', flag: 'wx' })
      ).rejects.toThrow(/EEXIST/);

      // Verify content unchanged
      const targetContent = await fs.readFile(targetFile, 'utf-8');
      expect(targetContent).toBe('ORIGINAL CONTENT');

      // The symlink exists but write was blocked
      const actualWritePath = await fs.realpath(racePath);
      expect(actualWritePath).toBe(await fs.realpath(targetFile));
      expect(isPathWithinAllowedDirectories(actualWritePath, allowed)).toBe(false);
    });
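    // Reviewer note: the mitigation exercised above, as a small sketch. Opening
    // with the exclusive 'wx' flag fails with EEXIST if anything (including an
    // attacker-planted symlink) already occupies the path, which closes the
    // validate-then-write window for newly created files. The helper name is
    // hypothetical.
    //
    //   async function writeFileExclusive(p: string, data: string): Promise<void> {
    //     await fs.writeFile(p, data, { encoding: 'utf-8', flag: 'wx' });
    //   }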

    it('should allow overwrites to legitimate files within allowed directories', async () => {
      const allowed = [allowedDir];
      const legitFile = path.join(allowedDir, 'legit-file.txt');

      // Create a legitimate file
      await fs.writeFile(legitFile, 'ORIGINAL', 'utf-8');

      // Opening with w should work for legitimate files
      const fd = await fs.open(legitFile, 'w');
      try {
        await fd.write('UPDATED', 0, 'utf-8');
      } finally {
        await fd.close();
      }

      const content = await fs.readFile(legitFile, 'utf-8');
      expect(content).toBe('UPDATED');
    });

    it('should handle symlinks that point within allowed directories', async () => {
      const symlinkSupported = await getSymlinkSupport();
      if (!symlinkSupported) {
        console.log(' ⏭️ Skipping symlinks within allowed directories test - symlinks not supported');
        return;
      }

      const allowed = [allowedDir];
      const targetFile = path.join(allowedDir, 'target.txt');
      const symlinkPath = path.join(allowedDir, 'symlink.txt');

      // Create target file within allowed directory
      await fs.writeFile(targetFile, 'TARGET CONTENT', 'utf-8');

      // Create symlink pointing to allowed file
      await fs.symlink(targetFile, symlinkPath);

      // Opening symlink with w follows it to the target
      const fd = await fs.open(symlinkPath, 'w');
      try {
        await fd.write('UPDATED VIA SYMLINK', 0, 'utf-8');
      } finally {
        await fd.close();
      }

      // Both symlink and target should show updated content
      const symlinkContent = await fs.readFile(symlinkPath, 'utf-8');
      const targetContent = await fs.readFile(targetFile, 'utf-8');
      expect(symlinkContent).toBe('UPDATED VIA SYMLINK');
      expect(targetContent).toBe('UPDATED VIA SYMLINK');
    });

    it('should prevent overwriting files through symlinks pointing outside allowed directories', async () => {
      const symlinkSupported = await getSymlinkSupport();
      if (!symlinkSupported) {
        console.log(' ⏭️ Skipping symlink overwrite prevention test - symlinks not supported');
        return;
      }

      const allowed = [allowedDir];
      const legitFile = path.join(allowedDir, 'existing.txt');
      const targetFile = path.join(forbiddenDir, 'target.txt');

      // Create a legitimate file first
      await fs.writeFile(legitFile, 'LEGIT CONTENT', 'utf-8');

      // Create target file in forbidden directory
      await fs.writeFile(targetFile, 'FORBIDDEN CONTENT', 'utf-8');

      // Now replace the legitimate file with a symlink to forbidden location
      await fs.unlink(legitFile);
      await fs.symlink(targetFile, legitFile);

      // Simulate the server's validation logic
      const stats = await fs.lstat(legitFile);
      expect(stats.isSymbolicLink()).toBe(true);

      const realPath = await fs.realpath(legitFile);
      expect(isPathWithinAllowedDirectories(realPath, allowed)).toBe(false);

      // With atomic rename, symlinks are replaced not followed,
      // so this test now demonstrates the protection

      // Verify content remains unchanged
      const targetContent = await fs.readFile(targetFile, 'utf-8');
      expect(targetContent).toBe('FORBIDDEN CONTENT');
    });

    it('demonstrates race condition in read operations', async () => {
      const symlinkSupported = await getSymlinkSupport();
      if (!symlinkSupported) {
        console.log(' ⏭️ Skipping race condition in read operations test - symlinks not supported');
        return;
      }

      const allowed = [allowedDir];
      const legitFile = path.join(allowedDir, 'readable.txt');
      const secretFile = path.join(forbiddenDir, 'secret.txt');

      // Create legitimate file
      await fs.writeFile(legitFile, 'PUBLIC CONTENT', 'utf-8');

      // Create secret file in forbidden directory
      await fs.writeFile(secretFile, 'SECRET CONTENT', 'utf-8');

      // Step 1: validatePath would pass for legitimate file
      expect(isPathWithinAllowedDirectories(legitFile, allowed)).toBe(true);

      // Step 2: Race condition - replace file with symlink after validation
      await fs.unlink(legitFile);
      await fs.symlink(secretFile, legitFile);

      // Step 3: Read operation follows symlink to forbidden location
      const content = await fs.readFile(legitFile, 'utf-8');

      // This shows the vulnerability - we read forbidden content
      expect(content).toBe('SECRET CONTENT');
      expect(isPathWithinAllowedDirectories(await fs.realpath(legitFile), allowed)).toBe(false);
    });

    it('verifies rename does not follow symlinks', async () => {
      const symlinkSupported = await getSymlinkSupport();
      if (!symlinkSupported) {
        console.log(' ⏭️ Skipping rename symlink test - symlinks not supported');
        return;
      }

      const allowed = [allowedDir];
      const tempFile = path.join(allowedDir, 'temp.txt');
      const targetSymlink = path.join(allowedDir, 'target-symlink.txt');
      const forbiddenTarget = path.join(forbiddenDir, 'forbidden-target.txt');

      // Create forbidden target
      await fs.writeFile(forbiddenTarget, 'ORIGINAL CONTENT', 'utf-8');

      // Create symlink pointing to forbidden location
      await fs.symlink(forbiddenTarget, targetSymlink);

      // Write temp file
      await fs.writeFile(tempFile, 'NEW CONTENT', 'utf-8');

      // Rename temp file to symlink path
      await fs.rename(tempFile, targetSymlink);

      // Check what happened
      const symlinkExists = await fs.lstat(targetSymlink).then(() => true).catch(() => false);
      const isSymlink = symlinkExists && (await fs.lstat(targetSymlink)).isSymbolicLink();
      const targetContent = await fs.readFile(targetSymlink, 'utf-8');
      const forbiddenContent = await fs.readFile(forbiddenTarget, 'utf-8');

      // Rename should replace the symlink with a regular file
      expect(isSymlink).toBe(false);
      expect(targetContent).toBe('NEW CONTENT');
      expect(forbiddenContent).toBe('ORIGINAL CONTENT'); // Unchanged
    });
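    // Reviewer note: the guarantee verified above follows from POSIX rename(2)
    // semantics. rename replaces the destination directory entry itself and does
    // not follow a symlink sitting at the destination, which is what makes the
    // write-to-temp-then-rename pattern a safe atomic overwrite strategy.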
  });
});
mcpServer/modules/filesystem/__tests__/roots-utils.test.ts (Normal file, 84 lines)
@@ -0,0 +1,84 @@
import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import { getValidRootDirectories } from '../roots-utils.js';
import { mkdtempSync, rmSync, mkdirSync, writeFileSync, realpathSync } from 'fs';
import { tmpdir } from 'os';
import { join } from 'path';
import type { Root } from '@modelcontextprotocol/sdk/types.js';

describe('getValidRootDirectories', () => {
  let testDir1: string;
  let testDir2: string;
  let testDir3: string;
  let testFile: string;

  beforeEach(() => {
    // Create test directories
    testDir1 = realpathSync(mkdtempSync(join(tmpdir(), 'mcp-roots-test1-')));
    testDir2 = realpathSync(mkdtempSync(join(tmpdir(), 'mcp-roots-test2-')));
    testDir3 = realpathSync(mkdtempSync(join(tmpdir(), 'mcp-roots-test3-')));

    // Create a test file (not a directory)
    testFile = join(testDir1, 'test-file.txt');
    writeFileSync(testFile, 'test content');
  });

  afterEach(() => {
    // Cleanup
    rmSync(testDir1, { recursive: true, force: true });
    rmSync(testDir2, { recursive: true, force: true });
    rmSync(testDir3, { recursive: true, force: true });
  });

  describe('valid directory processing', () => {
    it('should process all URI formats and edge cases', async () => {
      const roots = [
        { uri: `file://${testDir1}`, name: 'File URI' },
        { uri: testDir2, name: 'Plain path' },
        { uri: testDir3 } // Plain path without name property
      ];

      const result = await getValidRootDirectories(roots);

      expect(result).toContain(testDir1);
      expect(result).toContain(testDir2);
      expect(result).toContain(testDir3);
      expect(result).toHaveLength(3);
    });

    it('should normalize complex paths', async () => {
      const subDir = join(testDir1, 'subdir');
      mkdirSync(subDir);

      const roots = [
        { uri: `file://${testDir1}/./subdir/../subdir`, name: 'Complex Path' }
      ];

      const result = await getValidRootDirectories(roots);

      expect(result).toHaveLength(1);
      expect(result[0]).toBe(subDir);
    });
  });

  describe('error handling', () => {

    it('should handle various error types', async () => {
      const nonExistentDir = join(tmpdir(), 'non-existent-directory-12345');
      const invalidPath = '\0invalid\0path'; // Null bytes cause different error types
      const roots = [
        { uri: `file://${testDir1}`, name: 'Valid Dir' },
        { uri: `file://${nonExistentDir}`, name: 'Non-existent Dir' },
        { uri: `file://${testFile}`, name: 'File Not Dir' },
        { uri: `file://${invalidPath}`, name: 'Invalid Path' }
      ];

      const result = await getValidRootDirectories(roots);

      expect(result).toContain(testDir1);
      expect(result).not.toContain(nonExistentDir);
      expect(result).not.toContain(testFile);
      expect(result).not.toContain(invalidPath);
      expect(result).toHaveLength(1);
    });
  });
});
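// Reviewer note: a rough sketch of what getValidRootDirectories is assumed to do,
// inferred from the tests above (accept file:// URIs or plain paths, resolve and
// normalize each one, keep only entries that stat as directories). The actual
// implementation lives in '../roots-utils.js'; the fileURLToPath/fsp imports are
// hypothetical here.
//
//   async function getValidRootDirectoriesSketch(roots: Array<{ uri: string; name?: string }>): Promise<string[]> {
//     const dirs: string[] = [];
//     for (const root of roots) {
//       try {
//         const p = root.uri.startsWith('file://') ? fileURLToPath(root.uri) : root.uri;
//         const resolved = await fsp.realpath(p);
//         if ((await fsp.stat(resolved)).isDirectory()) dirs.push(resolved);
//       } catch {
//         // skip missing, unreadable, invalid, or non-directory entries
//       }
//     }
//     return dirs;
//   }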
@@ -0,0 +1,100 @@
import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import { spawn } from 'child_process';
import * as path from 'path';
import * as fs from 'fs/promises';
import * as os from 'os';

const SERVER_PATH = path.join(__dirname, '..', 'dist', 'index.js');

/**
 * Spawns the filesystem server with given arguments and returns exit info
 */
async function spawnServer(args: string[], timeoutMs = 2000): Promise<{ exitCode: number | null; stderr: string }> {
  return new Promise((resolve) => {
    const proc = spawn('node', [SERVER_PATH, ...args], {
      stdio: ['pipe', 'pipe', 'pipe'],
    });

    let stderr = '';
    proc.stderr?.on('data', (data) => {
      stderr += data.toString();
    });

    const timeout = setTimeout(() => {
      proc.kill('SIGTERM');
    }, timeoutMs);

    proc.on('close', (code) => {
      clearTimeout(timeout);
      resolve({ exitCode: code, stderr });
    });

    proc.on('error', (err) => {
      clearTimeout(timeout);
      resolve({ exitCode: 1, stderr: err.message });
    });
  });
}
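// Usage sketch: spawnServer(['/some/dir']) resolves once the child exits or is
// killed after timeoutMs. The happy-path tests below assert on the collected
// stderr rather than the exit code, since a healthy server only stops when the
// timeout sends SIGTERM; the all-directories-inaccessible test checks exitCode.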

describe('Startup Directory Validation', () => {
  let testDir: string;
  let accessibleDir: string;
  let accessibleDir2: string;

  beforeEach(async () => {
    testDir = await fs.mkdtemp(path.join(os.tmpdir(), 'fs-startup-test-'));
    accessibleDir = path.join(testDir, 'accessible');
    accessibleDir2 = path.join(testDir, 'accessible2');
    await fs.mkdir(accessibleDir, { recursive: true });
    await fs.mkdir(accessibleDir2, { recursive: true });
  });

  afterEach(async () => {
    await fs.rm(testDir, { recursive: true, force: true });
  });

  it('should start successfully with all accessible directories', async () => {
    const result = await spawnServer([accessibleDir, accessibleDir2]);
    // Server starts and runs (we kill it after timeout, so exit code is null or from SIGTERM)
    expect(result.stderr).toContain('Secure MCP Filesystem Server running on SSE');
    expect(result.stderr).not.toContain('Error:');
  });

  it('should skip inaccessible directory and continue with accessible one', async () => {
    const nonExistentDir = path.join(testDir, 'non-existent-dir-12345');

    const result = await spawnServer([nonExistentDir, accessibleDir]);

    // Should warn about inaccessible directory
    expect(result.stderr).toContain('Warning: Cannot access directory');
    expect(result.stderr).toContain(nonExistentDir);

    // Should still start successfully
    expect(result.stderr).toContain('Secure MCP Filesystem Server running on SSE');
  });

  it('should exit with error when ALL directories are inaccessible', async () => {
    const nonExistent1 = path.join(testDir, 'non-existent-1');
    const nonExistent2 = path.join(testDir, 'non-existent-2');

    const result = await spawnServer([nonExistent1, nonExistent2]);

    // Should exit with error
    expect(result.exitCode).toBe(1);
    expect(result.stderr).toContain('Error: None of the specified directories are accessible');
  });

  it('should warn when path is not a directory', async () => {
    const filePath = path.join(testDir, 'not-a-directory.txt');
    await fs.writeFile(filePath, 'content');

    const result = await spawnServer([filePath, accessibleDir]);

    // Should warn about non-directory
    expect(result.stderr).toContain('Warning:');
    expect(result.stderr).toContain('not a directory');

    // Should still start with the valid directory
    expect(result.stderr).toContain('Secure MCP Filesystem Server running on SSE');
  });
});
@@ -0,0 +1,158 @@
import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import * as fs from 'fs/promises';
import * as path from 'path';
import * as os from 'os';
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';
import { spawn } from 'child_process';

/**
 * Integration tests to verify that tool handlers return structuredContent
 * that matches the declared outputSchema.
 *
 * These tests address issues #3110, #3106, #3093 where tools were returning
 * structuredContent: { content: [contentBlock] } (an array) instead of
 * structuredContent: { content: string } as declared in outputSchema.
 */
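// For context, the shape difference under test (illustrative values only):
//   non-compliant: { structuredContent: { content: [{ type: 'text', text: 'hi' }] } }
//   compliant:     { structuredContent: { content: 'hi' } } // matches outputSchema { content: z.string() }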
describe('structuredContent schema compliance', () => {
  let client: Client;
  let transport: StdioClientTransport;
  let testDir: string;

  beforeEach(async () => {
    // Create a temp directory for testing
    testDir = await fs.mkdtemp(path.join(os.tmpdir(), 'mcp-fs-test-'));

    // Create test files
    await fs.writeFile(path.join(testDir, 'test.txt'), 'test content');
    await fs.mkdir(path.join(testDir, 'subdir'));
    await fs.writeFile(path.join(testDir, 'subdir', 'nested.txt'), 'nested content');

    // Start the MCP server
    const serverPath = path.resolve(__dirname, '../dist/index.js');
    transport = new StdioClientTransport({
      command: 'node',
      args: [serverPath, testDir],
    });

    client = new Client({
      name: 'test-client',
      version: '1.0.0',
    }, {
      capabilities: {}
    });

    await client.connect(transport);
  });

  afterEach(async () => {
    await client?.close();
    await fs.rm(testDir, { recursive: true, force: true });
  });

  describe('directory_tree', () => {
    it('should return structuredContent.content as a string, not an array', async () => {
      const result = await client.callTool({
        name: 'directory_tree',
        arguments: { path: testDir }
      });

      // The result should have structuredContent
      expect(result.structuredContent).toBeDefined();

      // structuredContent.content should be a string (matching outputSchema: { content: z.string() })
      const structuredContent = result.structuredContent as { content: unknown };
      expect(typeof structuredContent.content).toBe('string');

      // It should NOT be an array
      expect(Array.isArray(structuredContent.content)).toBe(false);

      // The content should be valid JSON representing the tree
      const treeData = JSON.parse(structuredContent.content as string);
      expect(Array.isArray(treeData)).toBe(true);
    });
  });

  describe('list_directory_with_sizes', () => {
    it('should return structuredContent.content as a string, not an array', async () => {
      const result = await client.callTool({
        name: 'list_directory_with_sizes',
        arguments: { path: testDir }
      });

      // The result should have structuredContent
      expect(result.structuredContent).toBeDefined();

      // structuredContent.content should be a string (matching outputSchema: { content: z.string() })
      const structuredContent = result.structuredContent as { content: unknown };
      expect(typeof structuredContent.content).toBe('string');

      // It should NOT be an array
      expect(Array.isArray(structuredContent.content)).toBe(false);

      // The content should contain directory listing info
      expect(structuredContent.content).toContain('[FILE]');
    });
  });

  describe('move_file', () => {
    it('should return structuredContent.content as a string, not an array', async () => {
      const sourcePath = path.join(testDir, 'test.txt');
      const destPath = path.join(testDir, 'moved.txt');

      const result = await client.callTool({
        name: 'move_file',
        arguments: {
          source: sourcePath,
          destination: destPath
        }
      });

      // The result should have structuredContent
      expect(result.structuredContent).toBeDefined();

      // structuredContent.content should be a string (matching outputSchema: { content: z.string() })
      const structuredContent = result.structuredContent as { content: unknown };
      expect(typeof structuredContent.content).toBe('string');

      // It should NOT be an array
      expect(Array.isArray(structuredContent.content)).toBe(false);

      // The content should contain a success message
      expect(structuredContent.content).toContain('Successfully moved');
    });
  });

  describe('list_directory (control - already working)', () => {
    it('should return structuredContent.content as a string', async () => {
      const result = await client.callTool({
        name: 'list_directory',
        arguments: { path: testDir }
      });

      expect(result.structuredContent).toBeDefined();

      const structuredContent = result.structuredContent as { content: unknown };
      expect(typeof structuredContent.content).toBe('string');
      expect(Array.isArray(structuredContent.content)).toBe(false);
    });
  });

  describe('search_files (control - already working)', () => {
    it('should return structuredContent.content as a string', async () => {
      const result = await client.callTool({
        name: 'search_files',
        arguments: {
          path: testDir,
          pattern: '*.txt'
        }
      });

      expect(result.structuredContent).toBeDefined();

      const structuredContent = result.structuredContent as { content: unknown };
      expect(typeof structuredContent.content).toBe('string');
      expect(Array.isArray(structuredContent.content)).toBe(false);
    });
  });
});
mcpServer/modules/filesystem/index.ts (Normal file, 687 lines)
@@ -0,0 +1,687 @@
#!/usr/bin/env node

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { SSEServerTransport } from "@modelcontextprotocol/sdk/server/sse.js";
import { CallToolResult } from "@modelcontextprotocol/sdk/types.js";
import fs from "fs/promises";
import { createReadStream } from "fs";
import http from "http";
import path from "path";
import { URL } from "url";
import { z } from "zod";
import { minimatch } from "minimatch";
import {
  // Function imports
  formatSize,
  validatePath,
  getFileStats,
  readFileContent,
  writeFileContent,
  searchFilesWithValidation,
  applyFileEdits,
  tailFile,
  headFile,
  setAllowedDirectories,
} from './lib.js';

// Always use the full filesystem (typical for container deployment)
const allowedDirectories: string[] = ["/"];
setAllowedDirectories(allowedDirectories);

// Schema definitions
const ReadTextFileArgsSchema = z.object({
  path: z.string(),
  tail: z.number().optional().describe('If provided, returns only the last N lines of the file'),
  head: z.number().optional().describe('If provided, returns only the first N lines of the file')
});

const ReadMediaFileArgsSchema = z.object({
  path: z.string()
});

const ReadMultipleFilesArgsSchema = z.object({
  paths: z
    .array(z.string())
    .min(1, "At least one file path must be provided")
    .describe("Array of file paths to read. Each path must be a string pointing to a valid file within allowed directories."),
});

const WriteFileArgsSchema = z.object({
  path: z.string(),
  content: z.string(),
});

const EditOperation = z.object({
  oldText: z.string().describe('Text to search for - must match exactly'),
  newText: z.string().describe('Text to replace with')
});

const EditFileArgsSchema = z.object({
  path: z.string(),
  edits: z.array(EditOperation),
  dryRun: z.boolean().default(false).describe('Preview changes using git-style diff format')
});

const CreateDirectoryArgsSchema = z.object({
  path: z.string(),
});

const ListDirectoryArgsSchema = z.object({
  path: z.string(),
});

const ListDirectoryWithSizesArgsSchema = z.object({
  path: z.string(),
  sortBy: z.enum(['name', 'size']).optional().default('name').describe('Sort entries by name or size'),
});

const DirectoryTreeArgsSchema = z.object({
  path: z.string(),
  excludePatterns: z.array(z.string()).optional().default([])
});

const MoveFileArgsSchema = z.object({
  source: z.string(),
  destination: z.string(),
});

const SearchFilesArgsSchema = z.object({
  path: z.string(),
  pattern: z.string(),
  excludePatterns: z.array(z.string()).optional().default([])
});

const GetFileInfoArgsSchema = z.object({
  path: z.string(),
});

// Server setup
const server = new McpServer(
  {
    name: "secure-filesystem-server",
    version: "0.2.0",
  }
);

// Reads a file as a stream of buffers, concatenates them, and then encodes
// the result to a Base64 string. This is a memory-efficient way to handle
// binary data from a stream before the final encoding.
async function readFileAsBase64Stream(filePath: string): Promise<string> {
  return new Promise((resolve, reject) => {
    const stream = createReadStream(filePath);
    const chunks: Buffer[] = [];
    stream.on('data', (chunk) => {
      chunks.push(chunk as Buffer);
    });
    stream.on('end', () => {
      const finalBuffer = Buffer.concat(chunks);
      resolve(finalBuffer.toString('base64'));
    });
    stream.on('error', (err) => reject(err));
  });
}
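// Usage sketch (hypothetical path): const b64 = await readFileAsBase64Stream('/data/image.png');
// Note that all chunks are still concatenated in memory before encoding, so peak
// memory scales with file size; the streaming read mainly avoids an extra full
// copy compared to encoding from an intermediate string.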

// Tool registrations

// read_file (deprecated) and read_text_file
const readTextFileHandler = async (args: z.infer<typeof ReadTextFileArgsSchema>) => {
  const validPath = await validatePath(args.path);

  if (args.head && args.tail) {
    throw new Error("Cannot specify both head and tail parameters simultaneously");
  }

  let content: string;
  if (args.tail) {
    content = await tailFile(validPath, args.tail);
  } else if (args.head) {
    content = await headFile(validPath, args.head);
  } else {
    content = await readFileContent(validPath);
  }

  return {
    content: [{ type: "text" as const, text: content }],
    structuredContent: { content }
  };
};
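// Client-side usage sketch (tool names and argument shapes as registered below;
// the callTool API mirrors the integration tests in this change):
//   const res = await client.callTool({
//     name: 'read_text_file',
//     arguments: { path: '/allowed/file.txt', head: 10 },
//   });
//   // res.structuredContent.content is the file text as a string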

server.registerTool(
  "read_file",
  {
    title: "Read File (Deprecated)",
    description: "Read the complete contents of a file as text. DEPRECATED: Use read_text_file instead.",
    inputSchema: ReadTextFileArgsSchema.shape,
    outputSchema: { content: z.string() },
    annotations: { readOnlyHint: true }
  },
  readTextFileHandler
);

server.registerTool(
  "read_text_file",
  {
    title: "Read Text File",
    description:
      "Read the complete contents of a file from the file system as text. " +
      "Handles various text encodings and provides detailed error messages " +
      "if the file cannot be read. Use this tool when you need to examine " +
      "the contents of a single file. Use the 'head' parameter to read only " +
      "the first N lines of a file, or the 'tail' parameter to read only " +
      "the last N lines of a file. Operates on the file as text regardless of extension. " +
      "Only works within allowed directories.",
    inputSchema: {
      path: z.string(),
      tail: z.number().optional().describe("If provided, returns only the last N lines of the file"),
      head: z.number().optional().describe("If provided, returns only the first N lines of the file")
    },
    outputSchema: { content: z.string() },
    annotations: { readOnlyHint: true }
  },
  readTextFileHandler
);

server.registerTool(
  "read_media_file",
  {
    title: "Read Media File",
    description:
      "Read an image or audio file. Returns the base64 encoded data and MIME type. " +
      "Only works within allowed directories.",
    inputSchema: {
      path: z.string()
    },
    outputSchema: {
      content: z.array(z.object({
        type: z.enum(["image", "audio", "blob"]),
        data: z.string(),
        mimeType: z.string()
      }))
    },
    annotations: { readOnlyHint: true }
  },
  async (args: z.infer<typeof ReadMediaFileArgsSchema>) => {
    const validPath = await validatePath(args.path);
    const extension = path.extname(validPath).toLowerCase();
    const mimeTypes: Record<string, string> = {
      ".png": "image/png",
      ".jpg": "image/jpeg",
      ".jpeg": "image/jpeg",
      ".gif": "image/gif",
      ".webp": "image/webp",
      ".bmp": "image/bmp",
      ".svg": "image/svg+xml",
      ".mp3": "audio/mpeg",
      ".wav": "audio/wav",
      ".ogg": "audio/ogg",
      ".flac": "audio/flac",
    };
    const mimeType = mimeTypes[extension] || "application/octet-stream";
    const data = await readFileAsBase64Stream(validPath);

    const type = mimeType.startsWith("image/")
      ? "image"
      : mimeType.startsWith("audio/")
        ? "audio"
        // Fallback for other binary types, not officially supported by the spec but has been used for some time
        : "blob";
    const contentItem = { type: type as 'image' | 'audio' | 'blob', data, mimeType };
    return {
      content: [contentItem],
      structuredContent: { content: [contentItem] }
    } as unknown as CallToolResult;
  }
);

server.registerTool(
  "read_multiple_files",
  {
    title: "Read Multiple Files",
    description:
      "Read the contents of multiple files simultaneously. This is more " +
      "efficient than reading files one by one when you need to analyze " +
      "or compare multiple files. Each file's content is returned with its " +
      "path as a reference. Failed reads for individual files won't stop " +
      "the entire operation. Only works within allowed directories.",
    inputSchema: {
      paths: z.array(z.string())
        .min(1)
        .describe("Array of file paths to read. Each path must be a string pointing to a valid file within allowed directories.")
    },
    outputSchema: { content: z.string() },
    annotations: { readOnlyHint: true }
  },
  async (args: z.infer<typeof ReadMultipleFilesArgsSchema>) => {
    const results = await Promise.all(
      args.paths.map(async (filePath: string) => {
        try {
          const validPath = await validatePath(filePath);
          const content = await readFileContent(validPath);
          return `${filePath}:\n${content}\n`;
        } catch (error) {
          const errorMessage = error instanceof Error ? error.message : String(error);
          return `${filePath}: Error - ${errorMessage}`;
        }
      }),
    );
    const text = results.join("\n---\n");
    return {
      content: [{ type: "text" as const, text }],
      structuredContent: { content: text }
    };
  }
);

server.registerTool(
  "write_file",
  {
    title: "Write File",
    description:
      "Create a new file or completely overwrite an existing file with new content. " +
      "Use with caution as it will overwrite existing files without warning. " +
      "Handles text content with proper encoding. Only works within allowed directories.",
    inputSchema: {
      path: z.string(),
      content: z.string()
    },
    outputSchema: { content: z.string() },
    annotations: { readOnlyHint: false, idempotentHint: true, destructiveHint: true }
  },
  async (args: z.infer<typeof WriteFileArgsSchema>) => {
    const validPath = await validatePath(args.path);
    await writeFileContent(validPath, args.content);
    const text = `Successfully wrote to ${args.path}`;
    return {
      content: [{ type: "text" as const, text }],
      structuredContent: { content: text }
    };
  }
);

server.registerTool(
  "edit_file",
  {
    title: "Edit File",
    description:
      "Make line-based edits to a text file. Each edit replaces exact line sequences " +
      "with new content. Returns a git-style diff showing the changes made. " +
      "Only works within allowed directories.",
    inputSchema: {
      path: z.string(),
      edits: z.array(z.object({
        oldText: z.string().describe("Text to search for - must match exactly"),
        newText: z.string().describe("Text to replace with")
|
||||||
|
})),
|
||||||
|
dryRun: z.boolean().default(false).describe("Preview changes using git-style diff format")
|
||||||
|
},
|
||||||
|
outputSchema: { content: z.string() },
|
||||||
|
annotations: { readOnlyHint: false, idempotentHint: false, destructiveHint: true }
|
||||||
|
},
|
||||||
|
async (args: z.infer<typeof EditFileArgsSchema>) => {
|
||||||
|
const validPath = await validatePath(args.path);
|
||||||
|
const result = await applyFileEdits(validPath, args.edits, args.dryRun);
|
||||||
|
return {
|
||||||
|
content: [{ type: "text" as const, text: result }],
|
||||||
|
structuredContent: { content: result }
|
||||||
|
};
|
||||||
|
}
|
||||||
|
);
|
||||||
|
|
||||||
|
server.registerTool(
|
||||||
|
"create_directory",
|
||||||
|
{
|
||||||
|
title: "Create Directory",
|
||||||
|
description:
|
||||||
|
"Create a new directory or ensure a directory exists. Can create multiple " +
|
||||||
|
"nested directories in one operation. If the directory already exists, " +
|
||||||
|
"this operation will succeed silently. Perfect for setting up directory " +
|
||||||
|
"structures for projects or ensuring required paths exist. Only works within allowed directories.",
|
||||||
|
inputSchema: {
|
||||||
|
path: z.string()
|
||||||
|
},
|
||||||
|
outputSchema: { content: z.string() },
|
||||||
|
annotations: { readOnlyHint: false, idempotentHint: true, destructiveHint: false }
|
||||||
|
},
|
||||||
|
async (args: z.infer<typeof CreateDirectoryArgsSchema>) => {
|
||||||
|
const validPath = await validatePath(args.path);
|
||||||
|
await fs.mkdir(validPath, { recursive: true });
|
||||||
|
const text = `Successfully created directory ${args.path}`;
|
||||||
|
return {
|
||||||
|
content: [{ type: "text" as const, text }],
|
||||||
|
structuredContent: { content: text }
|
||||||
|
};
|
||||||
|
}
|
||||||
|
);
|
||||||
|
|
||||||
|
server.registerTool(
|
||||||
|
"list_directory",
|
||||||
|
{
|
||||||
|
title: "List Directory",
|
||||||
|
description:
|
||||||
|
"Get a detailed listing of all files and directories in a specified path. " +
|
||||||
|
"Results clearly distinguish between files and directories with [FILE] and [DIR] " +
|
||||||
|
"prefixes. This tool is essential for understanding directory structure and " +
|
||||||
|
"finding specific files within a directory. Only works within allowed directories.",
|
||||||
|
inputSchema: {
|
||||||
|
path: z.string()
|
||||||
|
},
|
||||||
|
outputSchema: { content: z.string() },
|
||||||
|
annotations: { readOnlyHint: true }
|
||||||
|
},
|
||||||
|
async (args: z.infer<typeof ListDirectoryArgsSchema>) => {
|
||||||
|
const validPath = await validatePath(args.path);
|
||||||
|
const entries = await fs.readdir(validPath, { withFileTypes: true });
|
||||||
|
const formatted = entries
|
||||||
|
.map((entry) => `${entry.isDirectory() ? "[DIR]" : "[FILE]"} ${entry.name}`)
|
||||||
|
.join("\n");
|
||||||
|
return {
|
||||||
|
content: [{ type: "text" as const, text: formatted }],
|
||||||
|
structuredContent: { content: formatted }
|
||||||
|
};
|
||||||
|
}
|
||||||
|
);
|
||||||
|
|
||||||
|
server.registerTool(
|
||||||
|
"list_directory_with_sizes",
|
||||||
|
{
|
||||||
|
title: "List Directory with Sizes",
|
||||||
|
description:
|
||||||
|
"Get a detailed listing of all files and directories in a specified path, including sizes. " +
|
||||||
|
"Results clearly distinguish between files and directories with [FILE] and [DIR] " +
|
||||||
|
"prefixes. This tool is useful for understanding directory structure and " +
|
||||||
|
"finding specific files within a directory. Only works within allowed directories.",
|
||||||
|
inputSchema: {
|
||||||
|
path: z.string(),
|
||||||
|
sortBy: z.enum(["name", "size"]).optional().default("name").describe("Sort entries by name or size")
|
||||||
|
},
|
||||||
|
outputSchema: { content: z.string() },
|
||||||
|
annotations: { readOnlyHint: true }
|
||||||
|
},
|
||||||
|
async (args: z.infer<typeof ListDirectoryWithSizesArgsSchema>) => {
|
||||||
|
const validPath = await validatePath(args.path);
|
||||||
|
const entries = await fs.readdir(validPath, { withFileTypes: true });
|
||||||
|
|
||||||
|
// Get detailed information for each entry
|
||||||
|
const detailedEntries = await Promise.all(
|
||||||
|
entries.map(async (entry) => {
|
||||||
|
const entryPath = path.join(validPath, entry.name);
|
||||||
|
try {
|
||||||
|
const stats = await fs.stat(entryPath);
|
||||||
|
return {
|
||||||
|
name: entry.name,
|
||||||
|
isDirectory: entry.isDirectory(),
|
||||||
|
size: stats.size,
|
||||||
|
mtime: stats.mtime
|
||||||
|
};
|
||||||
|
} catch (error) {
|
||||||
|
return {
|
||||||
|
name: entry.name,
|
||||||
|
isDirectory: entry.isDirectory(),
|
||||||
|
size: 0,
|
||||||
|
mtime: new Date(0)
|
||||||
|
};
|
||||||
|
}
|
||||||
|
})
|
||||||
|
);
|
||||||
|
|
||||||
|
// Sort entries based on sortBy parameter
|
||||||
|
const sortedEntries = [...detailedEntries].sort((a, b) => {
|
||||||
|
if (args.sortBy === 'size') {
|
||||||
|
return b.size - a.size; // Descending by size
|
||||||
|
}
|
||||||
|
// Default sort by name
|
||||||
|
return a.name.localeCompare(b.name);
|
||||||
|
});
|
||||||
|
|
||||||
|
// Format the output
|
||||||
|
const formattedEntries = sortedEntries.map(entry =>
|
||||||
|
`${entry.isDirectory ? "[DIR]" : "[FILE]"} ${entry.name.padEnd(30)} ${
|
||||||
|
entry.isDirectory ? "" : formatSize(entry.size).padStart(10)
|
||||||
|
}`
|
||||||
|
);
|
||||||
|
|
||||||
|
// Add summary
|
||||||
|
const totalFiles = detailedEntries.filter(e => !e.isDirectory).length;
|
||||||
|
const totalDirs = detailedEntries.filter(e => e.isDirectory).length;
|
||||||
|
const totalSize = detailedEntries.reduce((sum, entry) => sum + (entry.isDirectory ? 0 : entry.size), 0);
|
||||||
|
|
||||||
|
const summary = [
|
||||||
|
"",
|
||||||
|
`Total: ${totalFiles} files, ${totalDirs} directories`,
|
||||||
|
`Combined size: ${formatSize(totalSize)}`
|
||||||
|
];
|
||||||
|
|
||||||
|
const text = [...formattedEntries, ...summary].join("\n");
|
||||||
|
const contentBlock = { type: "text" as const, text };
|
||||||
|
return {
|
||||||
|
content: [contentBlock],
|
||||||
|
structuredContent: { content: text }
|
||||||
|
};
|
||||||
|
}
|
||||||
|
);
|
||||||
|
|
||||||
|
server.registerTool(
|
||||||
|
"directory_tree",
|
||||||
|
{
|
||||||
|
title: "Directory Tree",
|
||||||
|
description:
|
||||||
|
"Get a recursive tree view of files and directories as a JSON structure. " +
|
||||||
|
"Each entry includes 'name', 'type' (file/directory), and 'children' for directories. " +
|
||||||
|
"Files have no children array, while directories always have a children array (which may be empty). " +
|
||||||
|
"The output is formatted with 2-space indentation for readability. Only works within allowed directories.",
|
||||||
|
inputSchema: {
|
||||||
|
path: z.string(),
|
||||||
|
excludePatterns: z.array(z.string()).optional().default([])
|
||||||
|
},
|
||||||
|
outputSchema: { content: z.string() },
|
||||||
|
annotations: { readOnlyHint: true }
|
||||||
|
},
|
||||||
|
async (args: z.infer<typeof DirectoryTreeArgsSchema>) => {
|
||||||
|
interface TreeEntry {
|
||||||
|
name: string;
|
||||||
|
type: 'file' | 'directory';
|
||||||
|
children?: TreeEntry[];
|
||||||
|
}
|
||||||
|
const rootPath = args.path;
|
||||||
|
|
||||||
|
async function buildTree(currentPath: string, excludePatterns: string[] = []): Promise<TreeEntry[]> {
|
||||||
|
const validPath = await validatePath(currentPath);
|
||||||
|
const entries = await fs.readdir(validPath, { withFileTypes: true });
|
||||||
|
const result: TreeEntry[] = [];
|
||||||
|
|
||||||
|
for (const entry of entries) {
|
||||||
|
const relativePath = path.relative(rootPath, path.join(currentPath, entry.name));
|
||||||
|
const shouldExclude = excludePatterns.some(pattern => {
|
||||||
|
if (pattern.includes('*')) {
|
||||||
|
return minimatch(relativePath, pattern, { dot: true });
|
||||||
|
}
|
||||||
|
// For files: match exact name or as part of path
|
||||||
|
// For directories: match as directory path
|
||||||
|
return minimatch(relativePath, pattern, { dot: true }) ||
|
||||||
|
minimatch(relativePath, `**/${pattern}`, { dot: true }) ||
|
||||||
|
minimatch(relativePath, `**/${pattern}/**`, { dot: true });
|
||||||
|
});
|
||||||
|
if (shouldExclude)
|
||||||
|
continue;
|
||||||
|
|
||||||
|
const entryData: TreeEntry = {
|
||||||
|
name: entry.name,
|
||||||
|
type: entry.isDirectory() ? 'directory' : 'file'
|
||||||
|
};
|
||||||
|
|
||||||
|
if (entry.isDirectory()) {
|
||||||
|
const subPath = path.join(currentPath, entry.name);
|
||||||
|
entryData.children = await buildTree(subPath, excludePatterns);
|
||||||
|
}
|
||||||
|
|
||||||
|
result.push(entryData);
|
||||||
|
}
|
||||||
|
|
||||||
|
return result;
|
||||||
|
}
|
||||||
|
|
||||||
|
const treeData = await buildTree(rootPath, args.excludePatterns);
|
||||||
|
const text = JSON.stringify(treeData, null, 2);
|
||||||
|
const contentBlock = { type: "text" as const, text };
|
||||||
|
return {
|
||||||
|
content: [contentBlock],
|
||||||
|
structuredContent: { content: text }
|
||||||
|
};
|
||||||
|
}
|
||||||
|
);
|
||||||
|
|
||||||
|
server.registerTool(
|
||||||
|
"move_file",
|
||||||
|
{
|
||||||
|
title: "Move File",
|
||||||
|
description:
|
||||||
|
"Move or rename files and directories. Can move files between directories " +
|
||||||
|
"and rename them in a single operation. If the destination exists, the " +
|
||||||
|
"operation will fail. Works across different directories and can be used " +
|
||||||
|
"for simple renaming within the same directory. Both source and destination must be within allowed directories.",
|
||||||
|
inputSchema: {
|
||||||
|
source: z.string(),
|
||||||
|
destination: z.string()
|
||||||
|
},
|
||||||
|
outputSchema: { content: z.string() },
|
||||||
|
annotations: { readOnlyHint: false, idempotentHint: false, destructiveHint: false }
|
||||||
|
},
|
||||||
|
async (args: z.infer<typeof MoveFileArgsSchema>) => {
|
||||||
|
const validSourcePath = await validatePath(args.source);
|
||||||
|
const validDestPath = await validatePath(args.destination);
|
||||||
|
await fs.rename(validSourcePath, validDestPath);
|
||||||
|
const text = `Successfully moved ${args.source} to ${args.destination}`;
|
||||||
|
const contentBlock = { type: "text" as const, text };
|
||||||
|
return {
|
||||||
|
content: [contentBlock],
|
||||||
|
structuredContent: { content: text }
|
||||||
|
};
|
||||||
|
}
|
||||||
|
);
|
||||||
|
|
||||||
|
server.registerTool(
|
||||||
|
"search_files",
|
||||||
|
{
|
||||||
|
title: "Search Files",
|
||||||
|
description:
|
||||||
|
"Recursively search for files and directories matching a pattern. " +
|
||||||
|
"The patterns should be glob-style patterns that match paths relative to the working directory. " +
|
||||||
|
"Use pattern like '*.ext' to match files in current directory, and '**/*.ext' to match files in all subdirectories. " +
|
||||||
|
"Returns full paths to all matching items. Great for finding files when you don't know their exact location. " +
|
||||||
|
"Only searches within allowed directories.",
|
||||||
|
inputSchema: {
|
||||||
|
path: z.string(),
|
||||||
|
pattern: z.string(),
|
||||||
|
excludePatterns: z.array(z.string()).optional().default([])
|
||||||
|
},
|
||||||
|
outputSchema: { content: z.string() },
|
||||||
|
annotations: { readOnlyHint: true }
|
||||||
|
},
|
||||||
|
async (args: z.infer<typeof SearchFilesArgsSchema>) => {
|
||||||
|
const validPath = await validatePath(args.path);
|
||||||
|
const results = await searchFilesWithValidation(validPath, args.pattern, allowedDirectories, { excludePatterns: args.excludePatterns });
|
||||||
|
const text = results.length > 0 ? results.join("\n") : "No matches found";
|
||||||
|
return {
|
||||||
|
content: [{ type: "text" as const, text }],
|
||||||
|
structuredContent: { content: text }
|
||||||
|
};
|
||||||
|
}
|
||||||
|
);
|
||||||
|
|
||||||
|
server.registerTool(
|
||||||
|
"get_file_info",
|
||||||
|
{
|
||||||
|
title: "Get File Info",
|
||||||
|
description:
|
||||||
|
"Retrieve detailed metadata about a file or directory. Returns comprehensive " +
|
||||||
|
"information including size, creation time, last modified time, permissions, " +
|
||||||
|
"and type. This tool is perfect for understanding file characteristics " +
|
||||||
|
"without reading the actual content. Only works within allowed directories.",
|
||||||
|
inputSchema: {
|
||||||
|
path: z.string()
|
||||||
|
},
|
||||||
|
outputSchema: { content: z.string() },
|
||||||
|
annotations: { readOnlyHint: true }
|
||||||
|
},
|
||||||
|
async (args: z.infer<typeof GetFileInfoArgsSchema>) => {
|
||||||
|
const validPath = await validatePath(args.path);
|
||||||
|
const info = await getFileStats(validPath);
|
||||||
|
const text = Object.entries(info)
|
||||||
|
.map(([key, value]) => `${key}: ${value}`)
|
||||||
|
.join("\n");
|
||||||
|
return {
|
||||||
|
content: [{ type: "text" as const, text }],
|
||||||
|
structuredContent: { content: text }
|
||||||
|
};
|
||||||
|
}
|
||||||
|
);
|
||||||
|
|
||||||
|
// SSE transport session routing (sessionId -> transport)
|
||||||
|
const sseTransportsBySessionId = new Map<string, SSEServerTransport>();
|
||||||
|
|
||||||
|
function runServer() {
|
||||||
|
const port = Number(process.env.MCP_PORT ?? process.env.SSE_PORT ?? 3000);
|
||||||
|
|
||||||
|
const httpServer = http.createServer(async (req, res) => {
|
||||||
|
const url = new URL(req.url ?? "/", `http://${req.headers.host ?? "localhost"}`);
|
||||||
|
const pathname = url.pathname;
|
||||||
|
|
||||||
|
if (req.method === "GET" && (pathname === "/sse" || pathname === "/")) {
|
||||||
|
try {
|
||||||
|
const transport = new SSEServerTransport("/messages", res);
|
||||||
|
sseTransportsBySessionId.set(transport.sessionId, transport);
|
||||||
|
transport.onclose = () => {
|
||||||
|
sseTransportsBySessionId.delete(transport.sessionId);
|
||||||
|
};
|
||||||
|
// SSE heartbeat to prevent client ReadTimeout during idle (e.g. while waiting for Ollama)
|
||||||
|
const heartbeatIntervalMs = 15_000;
|
||||||
|
const heartbeatInterval = setInterval(() => {
|
||||||
|
try {
|
||||||
|
if (!res.writableEnded) {
|
||||||
|
res.write(': heartbeat\n\n');
|
||||||
|
} else {
|
||||||
|
clearInterval(heartbeatInterval);
|
||||||
|
}
|
||||||
|
} catch {
|
||||||
|
clearInterval(heartbeatInterval);
|
||||||
|
}
|
||||||
|
}, heartbeatIntervalMs);
|
||||||
|
res.on('close', () => clearInterval(heartbeatInterval));
|
||||||
|
await server.connect(transport);
|
||||||
|
console.error("Secure MCP Filesystem Server: new SSE client connected");
|
||||||
|
} catch (error) {
|
||||||
|
console.error("SSE connection error:", error);
|
||||||
|
if (!res.headersSent) {
|
||||||
|
res.writeHead(500).end("Internal server error");
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
if (req.method === "POST" && pathname === "/messages") {
|
||||||
|
const sessionId = url.searchParams.get("sessionId");
|
||||||
|
if (!sessionId) {
|
||||||
|
res.writeHead(400).end("Missing sessionId query parameter");
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
const transport = sseTransportsBySessionId.get(sessionId);
|
||||||
|
if (!transport) {
|
||||||
|
res.writeHead(404).end("Unknown session");
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
await transport.handlePostMessage(req, res);
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
res.writeHead(404).end("Not found");
|
||||||
|
});
|
||||||
|
|
||||||
|
httpServer.listen(port, () => {
|
||||||
|
console.error(`Secure MCP Filesystem Server running on SSE at http://localhost:${port}`);
|
||||||
|
console.error(" GET /sse – open SSE stream (then POST to /messages?sessionId=...)");
|
||||||
|
console.error(" POST /messages?sessionId=<id> – send MCP messages");
|
||||||
|
console.error(" Allowed directory: / (full filesystem)");
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
runServer();
|
415  mcpServer/modules/filesystem/lib.ts  Normal file
@@ -0,0 +1,415 @@
import fs from "fs/promises";
import path from "path";
import os from 'os';
import { randomBytes } from 'crypto';
import { diffLines, createTwoFilesPatch } from 'diff';
import { minimatch } from 'minimatch';
import { normalizePath, expandHome } from './path-utils.js';
import { isPathWithinAllowedDirectories } from './path-validation.js';

// Global allowed directories - set by the main module
let allowedDirectories: string[] = [];

// Function to set allowed directories from the main module
export function setAllowedDirectories(directories: string[]): void {
  allowedDirectories = [...directories];
}

// Function to get current allowed directories
export function getAllowedDirectories(): string[] {
  return [...allowedDirectories];
}

// Type definitions
interface FileInfo {
  size: number;
  created: Date;
  modified: Date;
  accessed: Date;
  isDirectory: boolean;
  isFile: boolean;
  permissions: string;
}

export interface SearchOptions {
  excludePatterns?: string[];
}

export interface SearchResult {
  path: string;
  isDirectory: boolean;
}

// Pure Utility Functions
export function formatSize(bytes: number): string {
  const units = ['B', 'KB', 'MB', 'GB', 'TB'];
  if (bytes === 0) return '0 B';

  const i = Math.floor(Math.log(bytes) / Math.log(1024));

  if (i < 0 || i === 0) return `${bytes} ${units[0]}`;

  const unitIndex = Math.min(i, units.length - 1);
  return `${(bytes / Math.pow(1024, unitIndex)).toFixed(2)} ${units[unitIndex]}`;
}

export function normalizeLineEndings(text: string): string {
  return text.replace(/\r\n/g, '\n');
}

export function createUnifiedDiff(originalContent: string, newContent: string, filepath: string = 'file'): string {
  // Ensure consistent line endings for diff
  const normalizedOriginal = normalizeLineEndings(originalContent);
  const normalizedNew = normalizeLineEndings(newContent);

  return createTwoFilesPatch(
    filepath,
    filepath,
    normalizedOriginal,
    normalizedNew,
    'original',
    'modified'
  );
}

// Helper function to resolve relative paths against allowed directories
function resolveRelativePathAgainstAllowedDirectories(relativePath: string): string {
  if (allowedDirectories.length === 0) {
    // Fallback to process.cwd() if no allowed directories are set
    return path.resolve(process.cwd(), relativePath);
  }

  // Try to resolve relative path against each allowed directory
  for (const allowedDir of allowedDirectories) {
    const candidate = path.resolve(allowedDir, relativePath);
    const normalizedCandidate = normalizePath(candidate);

    // Check if the resulting path lies within any allowed directory
    if (isPathWithinAllowedDirectories(normalizedCandidate, allowedDirectories)) {
      return candidate;
    }
  }

  // If no valid resolution found, use the first allowed directory as base.
  // This provides a consistent fallback behavior.
  return path.resolve(allowedDirectories[0], relativePath);
}

// Security & Validation Functions
export async function validatePath(requestedPath: string): Promise<string> {
  const expandedPath = expandHome(requestedPath);
  const absolute = path.isAbsolute(expandedPath)
    ? path.resolve(expandedPath)
    : resolveRelativePathAgainstAllowedDirectories(expandedPath);

  const normalizedRequested = normalizePath(absolute);

  // Security: Check if path is within allowed directories before any file operations
  const isAllowed = isPathWithinAllowedDirectories(normalizedRequested, allowedDirectories);
  if (!isAllowed) {
    throw new Error(`Access denied - path outside allowed directories: ${absolute} not in ${allowedDirectories.join(', ')}`);
  }

  // Security: Handle symlinks by checking their real path to prevent symlink attacks.
  // This prevents attackers from creating symlinks that point outside allowed directories.
  try {
    const realPath = await fs.realpath(absolute);
    const normalizedReal = normalizePath(realPath);
    if (!isPathWithinAllowedDirectories(normalizedReal, allowedDirectories)) {
      throw new Error(`Access denied - symlink target outside allowed directories: ${realPath} not in ${allowedDirectories.join(', ')}`);
    }
    return realPath;
  } catch (error) {
    // Security: For new files that don't exist yet, verify parent directory.
    // This ensures we can't create files in unauthorized locations.
    if ((error as NodeJS.ErrnoException).code === 'ENOENT') {
      const parentDir = path.dirname(absolute);
      try {
        const realParentPath = await fs.realpath(parentDir);
        const normalizedParent = normalizePath(realParentPath);
        if (!isPathWithinAllowedDirectories(normalizedParent, allowedDirectories)) {
          throw new Error(`Access denied - parent directory outside allowed directories: ${realParentPath} not in ${allowedDirectories.join(', ')}`);
        }
        return absolute;
      } catch {
        throw new Error(`Parent directory does not exist: ${parentDir}`);
      }
    }
    throw error;
  }
}

// File Operations
export async function getFileStats(filePath: string): Promise<FileInfo> {
  const stats = await fs.stat(filePath);
  return {
    size: stats.size,
    created: stats.birthtime,
    modified: stats.mtime,
    accessed: stats.atime,
    isDirectory: stats.isDirectory(),
    isFile: stats.isFile(),
    permissions: stats.mode.toString(8).slice(-3),
  };
}

export async function readFileContent(filePath: string, encoding: string = 'utf-8'): Promise<string> {
  return await fs.readFile(filePath, encoding as BufferEncoding);
}

export async function writeFileContent(filePath: string, content: string): Promise<void> {
  try {
    // Security: 'wx' flag ensures exclusive creation - fails if file/symlink exists,
    // preventing writes through pre-existing symlinks
    await fs.writeFile(filePath, content, { encoding: "utf-8", flag: 'wx' });
  } catch (error) {
    if ((error as NodeJS.ErrnoException).code === 'EEXIST') {
      // Security: Use atomic rename to prevent race conditions where symlinks
      // could be created between validation and write. Rename operations
      // replace the target file atomically and don't follow symlinks.
      const tempPath = `${filePath}.${randomBytes(16).toString('hex')}.tmp`;
      try {
        await fs.writeFile(tempPath, content, 'utf-8');
        await fs.rename(tempPath, filePath);
      } catch (renameError) {
        try {
          await fs.unlink(tempPath);
        } catch {}
        throw renameError;
      }
    } else {
      throw error;
    }
  }
}

// File Editing Functions
interface FileEdit {
  oldText: string;
  newText: string;
}

export async function applyFileEdits(
  filePath: string,
  edits: FileEdit[],
  dryRun: boolean = false
): Promise<string> {
  // Read file content and normalize line endings
  const content = normalizeLineEndings(await fs.readFile(filePath, 'utf-8'));

  // Apply edits sequentially
  let modifiedContent = content;
  for (const edit of edits) {
    const normalizedOld = normalizeLineEndings(edit.oldText);
    const normalizedNew = normalizeLineEndings(edit.newText);

    // If exact match exists, use it
    if (modifiedContent.includes(normalizedOld)) {
      modifiedContent = modifiedContent.replace(normalizedOld, normalizedNew);
      continue;
    }

    // Otherwise, try line-by-line matching with flexibility for whitespace
    const oldLines = normalizedOld.split('\n');
    const contentLines = modifiedContent.split('\n');
    let matchFound = false;

    for (let i = 0; i <= contentLines.length - oldLines.length; i++) {
      const potentialMatch = contentLines.slice(i, i + oldLines.length);

      // Compare lines with normalized whitespace
      const isMatch = oldLines.every((oldLine, j) => {
        const contentLine = potentialMatch[j];
        return oldLine.trim() === contentLine.trim();
      });

      if (isMatch) {
        // Preserve original indentation of first line
        const originalIndent = contentLines[i].match(/^\s*/)?.[0] || '';
        const newLines = normalizedNew.split('\n').map((line, j) => {
          if (j === 0) return originalIndent + line.trimStart();
          // For subsequent lines, try to preserve relative indentation
          const oldIndent = oldLines[j]?.match(/^\s*/)?.[0] || '';
          const newIndent = line.match(/^\s*/)?.[0] || '';
          if (oldIndent && newIndent) {
            const relativeIndent = newIndent.length - oldIndent.length;
            return originalIndent + ' '.repeat(Math.max(0, relativeIndent)) + line.trimStart();
          }
          return line;
        });

        contentLines.splice(i, oldLines.length, ...newLines);
        modifiedContent = contentLines.join('\n');
        matchFound = true;
        break;
      }
    }

    if (!matchFound) {
      throw new Error(`Could not find exact match for edit:\n${edit.oldText}`);
    }
  }

  // Create unified diff
  const diff = createUnifiedDiff(content, modifiedContent, filePath);

  // Format diff with appropriate number of backticks
  let numBackticks = 3;
  while (diff.includes('`'.repeat(numBackticks))) {
    numBackticks++;
  }
  const formattedDiff = `${'`'.repeat(numBackticks)}diff\n${diff}${'`'.repeat(numBackticks)}\n\n`;

  if (!dryRun) {
    // Security: Use atomic rename to prevent race conditions where symlinks
    // could be created between validation and write. Rename operations
    // replace the target file atomically and don't follow symlinks.
    const tempPath = `${filePath}.${randomBytes(16).toString('hex')}.tmp`;
    try {
      await fs.writeFile(tempPath, modifiedContent, 'utf-8');
      await fs.rename(tempPath, filePath);
    } catch (error) {
      try {
        await fs.unlink(tempPath);
      } catch {}
      throw error;
    }
  }

  return formattedDiff;
}

// Memory-efficient implementation to get the last N lines of a file
export async function tailFile(filePath: string, numLines: number): Promise<string> {
  const CHUNK_SIZE = 1024; // Read 1KB at a time
  const stats = await fs.stat(filePath);
  const fileSize = stats.size;

  if (fileSize === 0) return '';

  // Open file for reading
  const fileHandle = await fs.open(filePath, 'r');
  try {
    const lines: string[] = [];
    let position = fileSize;
    let chunk = Buffer.alloc(CHUNK_SIZE);
    let linesFound = 0;
    let remainingText = '';

    // Read chunks from the end of the file until we have enough lines
    while (position > 0 && linesFound < numLines) {
      const size = Math.min(CHUNK_SIZE, position);
      position -= size;

      const { bytesRead } = await fileHandle.read(chunk, 0, size, position);
      if (!bytesRead) break;

      // Get the chunk as a string and prepend any remaining text from previous iteration
      const readData = chunk.slice(0, bytesRead).toString('utf-8');
      const chunkText = readData + remainingText;

      // Split by newlines and count
      const chunkLines = normalizeLineEndings(chunkText).split('\n');

      // If this isn't the end of the file, the first line is likely incomplete.
      // Save it to prepend to the next chunk.
      if (position > 0) {
        remainingText = chunkLines[0];
        chunkLines.shift(); // Remove the first (incomplete) line
      }

      // Add lines to our result (up to the number we need)
      for (let i = chunkLines.length - 1; i >= 0 && linesFound < numLines; i--) {
        lines.unshift(chunkLines[i]);
        linesFound++;
      }
    }

    return lines.join('\n');
  } finally {
    await fileHandle.close();
  }
}

// New function to get the first N lines of a file
export async function headFile(filePath: string, numLines: number): Promise<string> {
  const fileHandle = await fs.open(filePath, 'r');
  try {
    const lines: string[] = [];
    let buffer = '';
    let bytesRead = 0;
    const chunk = Buffer.alloc(1024); // 1KB buffer

    // Read chunks and count lines until we have enough or reach EOF
    while (lines.length < numLines) {
      const result = await fileHandle.read(chunk, 0, chunk.length, bytesRead);
      if (result.bytesRead === 0) break; // End of file
      bytesRead += result.bytesRead;
      buffer += chunk.slice(0, result.bytesRead).toString('utf-8');

      const newLineIndex = buffer.lastIndexOf('\n');
      if (newLineIndex !== -1) {
        const completeLines = buffer.slice(0, newLineIndex).split('\n');
        buffer = buffer.slice(newLineIndex + 1);
        for (const line of completeLines) {
          lines.push(line);
          if (lines.length >= numLines) break;
        }
      }
    }

    // If there is leftover content and we still need lines, add it
    if (buffer.length > 0 && lines.length < numLines) {
      lines.push(buffer);
    }

    return lines.join('\n');
  } finally {
    await fileHandle.close();
  }
}

export async function searchFilesWithValidation(
  rootPath: string,
  pattern: string,
  allowedDirectories: string[],
  options: SearchOptions = {}
): Promise<string[]> {
  const { excludePatterns = [] } = options;
  const results: string[] = [];

  async function search(currentPath: string) {
    const entries = await fs.readdir(currentPath, { withFileTypes: true });

    for (const entry of entries) {
      const fullPath = path.join(currentPath, entry.name);

      try {
        await validatePath(fullPath);

        const relativePath = path.relative(rootPath, fullPath);
        const shouldExclude = excludePatterns.some(excludePattern =>
          minimatch(relativePath, excludePattern, { dot: true })
        );

        if (shouldExclude) continue;

        // Use glob matching for the search pattern
        if (minimatch(relativePath, pattern, { dot: true })) {
          results.push(fullPath);
        }

        if (entry.isDirectory()) {
          await search(fullPath);
        }
      } catch {
        continue;
      }
    }
  }

  await search(rootPath);
  return results;
}
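A hypothetical caller, to show how the exported helpers above compose; the directory and file names are illustrative only, and `setAllowedDirectories` must run before `validatePath`:

```ts
// Sketch only - paths are made up, not part of the change.
import { setAllowedDirectories, validatePath, applyFileEdits } from './lib.js';

setAllowedDirectories(['/home/user/project']);
const target = await validatePath('/home/user/project/README.md');
// dryRun = true returns the fenced git-style diff without modifying the file
console.log(await applyFileEdits(target, [{ oldText: 'Hello', newText: 'Hi' }], true));
```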
43  mcpServer/modules/filesystem/package.json  Normal file
@@ -0,0 +1,43 @@
{
  "name": "@modelcontextprotocol/server-filesystem",
  "version": "0.6.3",
  "description": "MCP server for filesystem access",
  "license": "SEE LICENSE IN LICENSE",
  "mcpName": "io.github.modelcontextprotocol/server-filesystem",
  "author": "Model Context Protocol a Series of LF Projects, LLC.",
  "homepage": "https://modelcontextprotocol.io",
  "bugs": "https://github.com/modelcontextprotocol/servers/issues",
  "repository": {
    "type": "git",
    "url": "https://github.com/modelcontextprotocol/servers.git"
  },
  "type": "module",
  "bin": {
    "mcp-server-filesystem": "dist/index.js"
  },
  "files": [
    "dist"
  ],
  "scripts": {
    "build": "tsc && shx chmod +x dist/*.js",
    "prepare": "npm run build",
    "watch": "tsc --watch",
    "test": "vitest run --coverage"
  },
  "dependencies": {
    "@modelcontextprotocol/sdk": "^1.26.0",
    "diff": "^8.0.3",
    "glob": "^10.5.0",
    "minimatch": "^10.0.1",
    "zod-to-json-schema": "^3.23.5"
  },
  "devDependencies": {
    "@types/diff": "^5.0.9",
    "@types/minimatch": "^5.1.2",
    "@types/node": "^22",
    "@vitest/coverage-v8": "^2.1.8",
    "shx": "^0.3.4",
    "typescript": "^5.8.2",
    "vitest": "^2.1.8"
  }
}
118  mcpServer/modules/filesystem/path-utils.ts  Normal file
@@ -0,0 +1,118 @@
import path from "path";
import os from 'os';

/**
 * Converts WSL or Unix-style Windows paths to Windows format
 * @param p The path to convert
 * @returns Converted Windows path
 */
export function convertToWindowsPath(p: string): string {
  // Handle WSL paths (/mnt/c/...)
  // NEVER convert WSL paths - they are valid Linux paths that work with Node.js fs operations in WSL.
  // Converting them to Windows format (C:\...) breaks fs operations inside WSL.
  if (p.startsWith('/mnt/')) {
    return p; // Leave WSL paths unchanged
  }

  // Handle Unix-style Windows paths (/c/...)
  // Only convert when running on Windows
  if (p.match(/^\/[a-zA-Z]\//) && process.platform === 'win32') {
    const driveLetter = p.charAt(1).toUpperCase();
    const pathPart = p.slice(2).replace(/\//g, '\\');
    return `${driveLetter}:${pathPart}`;
  }

  // Handle standard Windows paths, ensuring backslashes
  if (p.match(/^[a-zA-Z]:/)) {
    return p.replace(/\//g, '\\');
  }

  // Leave non-Windows paths unchanged
  return p;
}

/**
 * Normalizes path by standardizing format while preserving OS-specific behavior
 * @param p The path to normalize
 * @returns Normalized path
 */
export function normalizePath(p: string): string {
  // Remove any surrounding quotes and whitespace
  p = p.trim().replace(/^["']|["']$/g, '');

  // Check if this is a Unix path that should not be converted.
  // WSL paths (/mnt/) should ALWAYS be preserved as they work correctly in WSL with Node.js fs.
  // Regular Unix paths should also be preserved.
  const isUnixPath = p.startsWith('/') && (
    // Always preserve WSL paths (/mnt/c/, /mnt/d/, etc.)
    p.match(/^\/mnt\/[a-z]\//i) ||
    // On non-Windows platforms, treat all absolute paths as Unix paths
    (process.platform !== 'win32') ||
    // On Windows, preserve Unix paths that aren't Unix-style Windows paths (/c/, /d/, etc.)
    (process.platform === 'win32' && !p.match(/^\/[a-zA-Z]\//))
  );

  if (isUnixPath) {
    // For Unix paths, just normalize without converting to Windows format:
    // replace double slashes with single slashes and remove trailing slashes
    return p.replace(/\/+/g, '/').replace(/(?<!^)\/$/, '');
  }

  // Convert Unix-style Windows paths (/c/, /d/) to Windows format if on Windows.
  // This function will now leave /mnt/ paths unchanged.
  p = convertToWindowsPath(p);

  // Handle double backslashes, preserving leading UNC \\
  if (p.startsWith('\\\\')) {
    // For UNC paths, first normalize any excessive leading backslashes to exactly \\,
    // then normalize double backslashes in the rest of the path
    let uncPath = p;
    // Replace multiple leading backslashes with exactly two
    uncPath = uncPath.replace(/^\\{2,}/, '\\\\');
    // Now normalize any remaining double backslashes in the rest of the path
    const restOfPath = uncPath.substring(2).replace(/\\\\/g, '\\');
    p = '\\\\' + restOfPath;
  } else {
    // For non-UNC paths, normalize all double backslashes
    p = p.replace(/\\\\/g, '\\');
  }

  // Use Node's path normalization, which handles . and .. segments
  let normalized = path.normalize(p);

  // Fix UNC paths after normalization (path.normalize can remove a leading backslash)
  if (p.startsWith('\\\\') && !normalized.startsWith('\\\\')) {
    normalized = '\\' + normalized;
  }

  // Handle Windows paths: convert slashes and ensure drive letter is capitalized
  if (normalized.match(/^[a-zA-Z]:/)) {
    let result = normalized.replace(/\//g, '\\');
    // Capitalize drive letter if present
    if (/^[a-z]:/.test(result)) {
      result = result.charAt(0).toUpperCase() + result.slice(1);
    }
    return result;
  }

  // On Windows, convert forward slashes to backslashes for relative paths.
  // On Linux/Unix, preserve forward slashes.
  if (process.platform === 'win32') {
    return normalized.replace(/\//g, '\\');
  }

  // On non-Windows platforms, keep the normalized path as-is
  return normalized;
}

/**
 * Expands home directory tildes in paths
 * @param filepath The path to expand
 * @returns Expanded path
 */
export function expandHome(filepath: string): string {
  if (filepath.startsWith('~/') || filepath === '~') {
    return path.join(os.homedir(), filepath.slice(1));
  }
  return filepath;
}
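Illustrative behavior of the helpers above; outputs depend on `process.platform` and `os.homedir()`, and the annotated values assume a Linux host with `HOME=/home/user`:

```ts
// Sketch only - expected values are annotated, not asserted.
import { expandHome, convertToWindowsPath } from './path-utils.js';

expandHome('~/notes.txt');            // '/home/user/notes.txt'
convertToWindowsPath('/mnt/c/work');  // '/mnt/c/work' - WSL paths are deliberately left alone
convertToWindowsPath('/c/work');      // 'C:\work' on Windows; unchanged elsewhere
```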
86  mcpServer/modules/filesystem/path-validation.ts  Normal file
@@ -0,0 +1,86 @@
import path from 'path';

/**
 * Checks if an absolute path is within any of the allowed directories.
 *
 * @param absolutePath - The absolute path to check (will be normalized)
 * @param allowedDirectories - Array of absolute allowed directory paths (will be normalized)
 * @returns true if the path is within an allowed directory, false otherwise
 * @throws Error if given relative paths after normalization
 */
export function isPathWithinAllowedDirectories(absolutePath: string, allowedDirectories: string[]): boolean {
  // Type validation
  if (typeof absolutePath !== 'string' || !Array.isArray(allowedDirectories)) {
    return false;
  }

  // Reject empty inputs
  if (!absolutePath || allowedDirectories.length === 0) {
    return false;
  }

  // Reject null bytes (forbidden in paths)
  if (absolutePath.includes('\x00')) {
    return false;
  }

  // Normalize the input path
  let normalizedPath: string;
  try {
    normalizedPath = path.resolve(path.normalize(absolutePath));
  } catch {
    return false;
  }

  // Verify it's absolute after normalization
  if (!path.isAbsolute(normalizedPath)) {
    throw new Error('Path must be absolute after normalization');
  }

  // Check against each allowed directory
  return allowedDirectories.some(dir => {
    if (typeof dir !== 'string' || !dir) {
      return false;
    }

    // Reject null bytes in allowed dirs
    if (dir.includes('\x00')) {
      return false;
    }

    // Normalize the allowed directory
    let normalizedDir: string;
    try {
      normalizedDir = path.resolve(path.normalize(dir));
    } catch {
      return false;
    }

    // Verify allowed directory is absolute after normalization
    if (!path.isAbsolute(normalizedDir)) {
      throw new Error('Allowed directories must be absolute paths after normalization');
    }

    // Check if normalizedPath is within normalizedDir.
    // A path is inside if it's the same or a subdirectory.
    if (normalizedPath === normalizedDir) {
      return true;
    }

    // Special case for root directory to avoid double slash.
    // On Windows, we need to check if both paths are on the same drive.
    if (normalizedDir === path.sep) {
      return normalizedPath.startsWith(path.sep);
    }

    // On Windows, also check for drive root (e.g., "C:\")
    if (path.sep === '\\' && normalizedDir.match(/^[A-Za-z]:\\?$/)) {
      // Ensure both paths are on the same drive
      const dirDrive = normalizedDir.charAt(0).toLowerCase();
      const pathDrive = normalizedPath.charAt(0).toLowerCase();
      return pathDrive === dirDrive && normalizedPath.startsWith(normalizedDir.replace(/\\?$/, '\\'));
    }

    return normalizedPath.startsWith(normalizedDir + path.sep);
  });
}
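A few illustrative checks against the function above (POSIX separators assumed; paths are made up):

```ts
// Sketch only - demonstrates boundary and traversal handling.
import { isPathWithinAllowedDirectories } from './path-validation.js';

isPathWithinAllowedDirectories('/home/user/project/notes.txt', ['/home/user/project']);   // true
isPathWithinAllowedDirectories('/home/user/projectX/notes.txt', ['/home/user/project']);  // false: plain prefix match is not enough
isPathWithinAllowedDirectories('/home/user/project/../secret', ['/home/user/project']);   // false: '..' is resolved before the check
```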
76  mcpServer/modules/filesystem/roots-utils.ts  Normal file
@@ -0,0 +1,76 @@
import { promises as fs, type Stats } from 'fs';
import path from 'path';
import os from 'os';
import { normalizePath } from './path-utils.js';
import type { Root } from '@modelcontextprotocol/sdk/types.js';

/**
 * Converts a root URI to a normalized directory path with basic security validation.
 * @param rootUri - File URI (file://...) or plain directory path
 * @returns Promise resolving to validated path or null if invalid
 */
async function parseRootUri(rootUri: string): Promise<string | null> {
  try {
    const rawPath = rootUri.startsWith('file://') ? rootUri.slice(7) : rootUri;
    const expandedPath = rawPath.startsWith('~/') || rawPath === '~'
      ? path.join(os.homedir(), rawPath.slice(1))
      : rawPath;
    const absolutePath = path.resolve(expandedPath);
    const resolvedPath = await fs.realpath(absolutePath);
    return normalizePath(resolvedPath);
  } catch {
    return null; // Path doesn't exist or other error
  }
}

/**
 * Formats error message for directory validation failures.
 * @param dir - Directory path that failed validation
 * @param error - Error that occurred during validation
 * @param reason - Specific reason for failure
 * @returns Formatted error message
 */
function formatDirectoryError(dir: string, error?: unknown, reason?: string): string {
  if (reason) {
    return `Skipping ${reason}: ${dir}`;
  }
  const message = error instanceof Error ? error.message : String(error);
  return `Skipping invalid directory: ${dir} due to error: ${message}`;
}

/**
 * Resolves requested root directories from MCP root specifications.
 *
 * Converts root URI specifications (file:// URIs or plain paths) into normalized
 * directory paths, validating that each path exists and is a directory.
 * Includes symlink resolution for security.
 *
 * @param requestedRoots - Array of root specifications with URI and optional name
 * @returns Promise resolving to array of validated directory paths
 */
export async function getValidRootDirectories(
  requestedRoots: readonly Root[]
): Promise<string[]> {
  const validatedDirectories: string[] = [];

  for (const requestedRoot of requestedRoots) {
    const resolvedPath = await parseRootUri(requestedRoot.uri);
    if (!resolvedPath) {
      console.error(formatDirectoryError(requestedRoot.uri, undefined, 'invalid path or inaccessible'));
      continue;
    }

    try {
      const stats: Stats = await fs.stat(resolvedPath);
      if (stats.isDirectory()) {
        validatedDirectories.push(resolvedPath);
      } else {
        console.error(formatDirectoryError(resolvedPath, undefined, 'non-directory root'));
      }
    } catch (error) {
      console.error(formatDirectoryError(resolvedPath, error));
    }
  }

  return validatedDirectories;
}
17  mcpServer/modules/filesystem/tsconfig.json  Normal file
@@ -0,0 +1,17 @@
{
  "compilerOptions": {
    "outDir": "./dist",
    "rootDir": ".",
    "moduleResolution": "NodeNext",
    "module": "NodeNext"
  },
  "include": [
    "./**/*.ts"
  ],
  "exclude": [
    "**/__tests__/**",
    "**/*.test.ts",
    "**/*.spec.ts",
    "vitest.config.ts"
  ]
}
14  mcpServer/modules/filesystem/vitest.config.ts  Normal file
@@ -0,0 +1,14 @@
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    globals: true,
    environment: 'node',
    include: ['**/__tests__/**/*.test.ts'],
    coverage: {
      provider: 'v8',
      include: ['**/*.ts'],
      exclude: ['**/__tests__/**', '**/dist/**'],
    },
  },
});
24  mcpServer/modules/memory/Dockerfile  Normal file
@@ -0,0 +1,24 @@
FROM node:22.12-alpine AS builder

COPY . /app
COPY tsconfig.json /tsconfig.json

WORKDIR /app

RUN npm install

RUN npm ci --ignore-scripts --omit-dev

FROM node:22-alpine AS release

COPY --from=builder /app/dist /app/dist
COPY --from=builder /app/package.json /app/package.json
COPY --from=builder /app/package-lock.json /app/package-lock.json

ENV NODE_ENV=production

WORKDIR /app

RUN npm ci --ignore-scripts --omit-dev

ENTRYPOINT ["node", "dist/index.js"]
283  mcpServer/modules/memory/README.md  Normal file
@@ -0,0 +1,283 @@
# Knowledge Graph Memory Server

A basic implementation of persistent memory using a local knowledge graph. This lets Claude remember information about the user across chats.

## Core Concepts

### Entities
Entities are the primary nodes in the knowledge graph. Each entity has:
- A unique name (identifier)
- An entity type (e.g., "person", "organization", "event")
- A list of observations

Example:
```json
{
  "name": "John_Smith",
  "entityType": "person",
  "observations": ["Speaks fluent Spanish"]
}
```

### Relations
Relations define directed connections between entities. They are always stored in active voice and describe how entities interact or relate to each other.

Example:
```json
{
  "from": "John_Smith",
  "to": "Anthropic",
  "relationType": "works_at"
}
```

### Observations
Observations are discrete pieces of information about an entity. They are:

- Stored as strings
- Attached to specific entities
- Added or removed independently
- Ideally atomic (one fact per observation)

Example:
```json
{
  "entityName": "John_Smith",
  "observations": [
    "Speaks fluent Spanish",
    "Graduated in 2019",
    "Prefers morning meetings"
  ]
}
```

## API

### Tools
- **create_entities**
  - Create multiple new entities in the knowledge graph
  - Input: `entities` (array of objects)
  - Each object contains:
    - `name` (string): Entity identifier
    - `entityType` (string): Type classification
    - `observations` (string[]): Associated observations
  - Ignores entities with existing names

- **create_relations**
  - Create multiple new relations between entities
  - Input: `relations` (array of objects)
  - Each object contains:
    - `from` (string): Source entity name
    - `to` (string): Target entity name
    - `relationType` (string): Relationship type in active voice
  - Skips duplicate relations

- **add_observations**
  - Add new observations to existing entities
  - Input: `observations` (array of objects)
  - Each object contains:
    - `entityName` (string): Target entity
    - `contents` (string[]): New observations to add
  - Returns added observations per entity
  - Fails if entity doesn't exist

- **delete_entities**
  - Remove entities and their relations
  - Input: `entityNames` (string[])
  - Cascading deletion of associated relations
  - Silent operation if entity doesn't exist

- **delete_observations**
  - Remove specific observations from entities
  - Input: `deletions` (array of objects)
  - Each object contains:
    - `entityName` (string): Target entity
    - `observations` (string[]): Observations to remove
  - Silent operation if observation doesn't exist

- **delete_relations**
  - Remove specific relations from the graph
  - Input: `relations` (array of objects)
  - Each object contains:
    - `from` (string): Source entity name
    - `to` (string): Target entity name
    - `relationType` (string): Relationship type
  - Silent operation if relation doesn't exist

- **read_graph**
  - Read the entire knowledge graph
  - No input required
  - Returns complete graph structure with all entities and relations

- **search_nodes**
  - Search for nodes based on a query
  - Input: `query` (string)
  - Searches across:
    - Entity names
    - Entity types
    - Observation content
  - Returns matching entities and their relations

- **open_nodes**
  - Retrieve specific nodes by name
  - Input: `names` (string[])
  - Returns:
    - Requested entities
    - Relations between requested entities
  - Silently skips non-existent nodes
|
|
||||||
|
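For example, a `create_entities` call takes a payload of this shape (the entity shown is the same illustrative `John_Smith` from above):

```json
{
  "entities": [
    {
      "name": "John_Smith",
      "entityType": "person",
      "observations": ["Speaks fluent Spanish"]
    }
  ]
}
```
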
## Usage with Claude Desktop

### Setup

Add this to your claude_desktop_config.json:

#### Docker

```json
{
  "mcpServers": {
    "memory": {
      "command": "docker",
      "args": ["run", "-i", "-v", "claude-memory:/app/dist", "--rm", "mcp/memory"]
    }
  }
}
```

#### NPX

```json
{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-memory"
      ]
    }
  }
}
```

#### NPX with custom settings

The server can be configured using the following environment variables:

```json
{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-memory"
      ],
      "env": {
        "MEMORY_FILE_PATH": "/path/to/custom/memory.jsonl"
      }
    }
  }
}
```

- `MEMORY_FILE_PATH`: Path to the memory storage JSONL file (default: `memory.jsonl` in the server directory)

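The same variable also works when launching the server directly from a shell; the file path here is only an example:

```sh
MEMORY_FILE_PATH=/tmp/claude-memory.jsonl npx -y @modelcontextprotocol/server-memory
```
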
## VS Code Installation Instructions

For quick installation, use one of the one-click installation buttons below:

[![Install with NPX in VS Code](https://img.shields.io/badge/VS_Code-NPM-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://insiders.vscode.dev/redirect/mcp/install?name=memory&config=%7B%22command%22%3A%22npx%22%2C%22args%22%3A%5B%22-y%22%2C%22%40modelcontextprotocol%2Fserver-memory%22%5D%7D) [![Install with NPX in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-NPM-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://insiders.vscode.dev/redirect/mcp/install?name=memory&config=%7B%22command%22%3A%22npx%22%2C%22args%22%3A%5B%22-y%22%2C%22%40modelcontextprotocol%2Fserver-memory%22%5D%7D&quality=insiders)

[![Install with Docker in VS Code](https://img.shields.io/badge/VS_Code-Docker-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://insiders.vscode.dev/redirect/mcp/install?name=memory&config=%7B%22command%22%3A%22docker%22%2C%22args%22%3A%5B%22run%22%2C%22-i%22%2C%22-v%22%2C%22claude-memory%3A%2Fapp%2Fdist%22%2C%22--rm%22%2C%22mcp%2Fmemory%22%5D%7D) [![Install with Docker in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Docker-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://insiders.vscode.dev/redirect/mcp/install?name=memory&config=%7B%22command%22%3A%22docker%22%2C%22args%22%3A%5B%22run%22%2C%22-i%22%2C%22-v%22%2C%22claude-memory%3A%2Fapp%2Fdist%22%2C%22--rm%22%2C%22mcp%2Fmemory%22%5D%7D&quality=insiders)

For manual installation, you can configure the MCP server using one of these methods:

**Method 1: User Configuration (Recommended)**

Add the configuration to your user-level MCP configuration file. Open the Command Palette (`Ctrl + Shift + P`) and run `MCP: Open User Configuration`. This will open your user `mcp.json` file, where you can add the server configuration.

**Method 2: Workspace Configuration**

Alternatively, you can add the configuration to a file called `.vscode/mcp.json` in your workspace. This allows you to share the configuration with others.

> For more details about MCP configuration in VS Code, see the [official VS Code MCP documentation](https://code.visualstudio.com/docs/copilot/customization/mcp-servers).

#### NPX

```json
{
  "servers": {
    "memory": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-memory"
      ]
    }
  }
}
```

#### Docker

```json
{
  "servers": {
    "memory": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "-v",
        "claude-memory:/app/dist",
        "--rm",
        "mcp/memory"
      ]
    }
  }
}
```

### System Prompt

The prompt for utilizing memory depends on the use case. Changing the prompt will help the model determine the frequency and types of memories created.

Here is an example prompt for chat personalization. You could use this prompt in the "Custom Instructions" field of a [Claude.ai Project](https://www.anthropic.com/news/projects).

```
Follow these steps for each interaction:

1. User Identification:
   - You should assume that you are interacting with default_user
   - If you have not identified default_user, proactively try to do so.

2. Memory Retrieval:
   - Always begin your chat by saying only "Remembering..." and retrieve all relevant information from your knowledge graph
   - Always refer to your knowledge graph as your "memory"

3. Memory:
   - While conversing with the user, be attentive to any new information that falls into these categories:
     a) Basic Identity (age, gender, location, job title, education level, etc.)
     b) Behaviors (interests, habits, etc.)
     c) Preferences (communication style, preferred language, etc.)
     d) Goals (goals, targets, aspirations, etc.)
     e) Relationships (personal and professional relationships up to 3 degrees of separation)

4. Memory Update:
   - If any new information was gathered during the interaction, update your memory as follows:
     a) Create entities for recurring organizations, people, and significant events
     b) Connect them to the current entities using relations
     c) Store facts about them as observations
```

## Building

Docker:

```sh
docker build -t mcp/memory -f src/memory/Dockerfile .
```

Note: a volume created by a prior `mcp/memory` container may contain an old `index.js` that shadows the new container's `index.js` (the volume is mounted over `/app/dist`). If you are using a Docker volume for storage, delete the old volume's `index.js` file before starting the new container.

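A minimal cleanup sketch, assuming the volume is named `claude-memory` as in the configurations above (the throwaway `alpine` helper container is illustrative):

```sh
# Mount the volume into a disposable container and remove the stale file
docker run --rm -v claude-memory:/app/dist alpine rm -f /app/dist/index.js
```
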
## License

This MCP server is licensed under the MIT License. This means you are free to use, modify, and distribute the software, subject to the terms and conditions of the MIT License. For more details, please see the LICENSE file in the project repository.

156 mcpServer/modules/memory/__tests__/file-path.test.ts (new file)
@@ -0,0 +1,156 @@

import { describe, it, expect, beforeEach, afterEach, vi } from 'vitest';
import { promises as fs } from 'fs';
import path from 'path';
import { fileURLToPath } from 'url';
import { ensureMemoryFilePath, defaultMemoryPath } from '../index.js';

describe('ensureMemoryFilePath', () => {
  const testDir = path.dirname(fileURLToPath(import.meta.url));
  const oldMemoryPath = path.join(testDir, '..', 'memory.json');
  const newMemoryPath = path.join(testDir, '..', 'memory.jsonl');

  let originalEnv: string | undefined;

  beforeEach(() => {
    // Save original environment variable
    originalEnv = process.env.MEMORY_FILE_PATH;
    // Delete environment variable
    delete process.env.MEMORY_FILE_PATH;
  });

  afterEach(async () => {
    // Restore original environment variable
    if (originalEnv !== undefined) {
      process.env.MEMORY_FILE_PATH = originalEnv;
    } else {
      delete process.env.MEMORY_FILE_PATH;
    }

    // Clean up test files
    try {
      await fs.unlink(oldMemoryPath);
    } catch {
      // Ignore if file doesn't exist
    }
    try {
      await fs.unlink(newMemoryPath);
    } catch {
      // Ignore if file doesn't exist
    }
  });

  describe('with MEMORY_FILE_PATH environment variable', () => {
    it('should return absolute path when MEMORY_FILE_PATH is absolute', async () => {
      const absolutePath = '/tmp/custom-memory.jsonl';
      process.env.MEMORY_FILE_PATH = absolutePath;

      const result = await ensureMemoryFilePath();

      expect(result).toBe(absolutePath);
    });

    it('should convert relative path to absolute when MEMORY_FILE_PATH is relative', async () => {
      const relativePath = 'custom-memory.jsonl';
      process.env.MEMORY_FILE_PATH = relativePath;

      const result = await ensureMemoryFilePath();

      expect(path.isAbsolute(result)).toBe(true);
      expect(result).toContain('custom-memory.jsonl');
    });

    it('should handle Windows absolute paths', async () => {
      const windowsPath = 'C:\\temp\\memory.jsonl';
      process.env.MEMORY_FILE_PATH = windowsPath;

      const result = await ensureMemoryFilePath();

      // On Windows, should return as-is; on Unix, will be treated as relative
      if (process.platform === 'win32') {
        expect(result).toBe(windowsPath);
      } else {
        expect(path.isAbsolute(result)).toBe(true);
      }
    });
  });

  describe('without MEMORY_FILE_PATH environment variable', () => {
    it('should return default path when no files exist', async () => {
      const result = await ensureMemoryFilePath();

      expect(result).toBe(defaultMemoryPath);
    });

    it('should migrate from memory.json to memory.jsonl when only old file exists', async () => {
      // Create old memory.json file
      await fs.writeFile(oldMemoryPath, '{"test":"data"}');

      const consoleErrorSpy = vi.spyOn(console, 'error').mockImplementation(() => {});

      const result = await ensureMemoryFilePath();

      expect(result).toBe(defaultMemoryPath);

      // Verify migration happened
      const newFileExists = await fs.access(newMemoryPath).then(() => true).catch(() => false);
      const oldFileExists = await fs.access(oldMemoryPath).then(() => true).catch(() => false);

      expect(newFileExists).toBe(true);
      expect(oldFileExists).toBe(false);

      // Verify console messages
      expect(consoleErrorSpy).toHaveBeenCalledWith(
        expect.stringContaining('DETECTED: Found legacy memory.json file')
      );
      expect(consoleErrorSpy).toHaveBeenCalledWith(
        expect.stringContaining('COMPLETED: Successfully migrated')
      );

      consoleErrorSpy.mockRestore();
    });

    it('should use new file when both old and new files exist', async () => {
      // Create both files
      await fs.writeFile(oldMemoryPath, '{"old":"data"}');
      await fs.writeFile(newMemoryPath, '{"new":"data"}');

      const consoleErrorSpy = vi.spyOn(console, 'error').mockImplementation(() => {});

      const result = await ensureMemoryFilePath();

      expect(result).toBe(defaultMemoryPath);

      // Verify no migration happened (both files should still exist)
      const newFileExists = await fs.access(newMemoryPath).then(() => true).catch(() => false);
      const oldFileExists = await fs.access(oldMemoryPath).then(() => true).catch(() => false);

      expect(newFileExists).toBe(true);
      expect(oldFileExists).toBe(true);

      // Verify no console messages about migration
      expect(consoleErrorSpy).not.toHaveBeenCalled();

      consoleErrorSpy.mockRestore();
    });

    it('should preserve file content during migration', async () => {
      const testContent = '{"entities": [{"name": "test", "type": "person"}]}';
      await fs.writeFile(oldMemoryPath, testContent);

      await ensureMemoryFilePath();

      const migratedContent = await fs.readFile(newMemoryPath, 'utf-8');
      expect(migratedContent).toBe(testContent);
    });
  });

  describe('defaultMemoryPath', () => {
    it('should end with memory.jsonl', () => {
      expect(defaultMemoryPath).toMatch(/memory\.jsonl$/);
    });

    it('should be an absolute path', () => {
      expect(path.isAbsolute(defaultMemoryPath)).toBe(true);
    });
  });
});

483 mcpServer/modules/memory/__tests__/knowledge-graph.test.ts (new file)
@@ -0,0 +1,483 @@

import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import { promises as fs } from 'fs';
import path from 'path';
import { fileURLToPath } from 'url';
import { KnowledgeGraphManager, Entity, Relation, KnowledgeGraph } from '../index.js';

describe('KnowledgeGraphManager', () => {
  let manager: KnowledgeGraphManager;
  let testFilePath: string;

  beforeEach(async () => {
    // Create a temporary test file path
    testFilePath = path.join(
      path.dirname(fileURLToPath(import.meta.url)),
      `test-memory-${Date.now()}.jsonl`
    );
    manager = new KnowledgeGraphManager(testFilePath);
  });

  afterEach(async () => {
    // Clean up test file
    try {
      await fs.unlink(testFilePath);
    } catch (error) {
      // Ignore errors if file doesn't exist
    }
  });

  describe('createEntities', () => {
    it('should create new entities', async () => {
      const entities: Entity[] = [
        { name: 'Alice', entityType: 'person', observations: ['works at Acme Corp'] },
        { name: 'Bob', entityType: 'person', observations: ['likes programming'] },
      ];

      const newEntities = await manager.createEntities(entities);
      expect(newEntities).toHaveLength(2);
      expect(newEntities).toEqual(entities);

      const graph = await manager.readGraph();
      expect(graph.entities).toHaveLength(2);
    });

    it('should not create duplicate entities', async () => {
      const entities: Entity[] = [
        { name: 'Alice', entityType: 'person', observations: ['works at Acme Corp'] },
      ];

      await manager.createEntities(entities);
      const newEntities = await manager.createEntities(entities);

      expect(newEntities).toHaveLength(0);

      const graph = await manager.readGraph();
      expect(graph.entities).toHaveLength(1);
    });

    it('should handle empty entity arrays', async () => {
      const newEntities = await manager.createEntities([]);
      expect(newEntities).toHaveLength(0);
    });
  });

  describe('createRelations', () => {
    it('should create new relations', async () => {
      await manager.createEntities([
        { name: 'Alice', entityType: 'person', observations: [] },
        { name: 'Bob', entityType: 'person', observations: [] },
      ]);

      const relations: Relation[] = [
        { from: 'Alice', to: 'Bob', relationType: 'knows' },
      ];

      const newRelations = await manager.createRelations(relations);
      expect(newRelations).toHaveLength(1);
      expect(newRelations).toEqual(relations);

      const graph = await manager.readGraph();
      expect(graph.relations).toHaveLength(1);
    });

    it('should not create duplicate relations', async () => {
      await manager.createEntities([
        { name: 'Alice', entityType: 'person', observations: [] },
        { name: 'Bob', entityType: 'person', observations: [] },
      ]);

      const relations: Relation[] = [
        { from: 'Alice', to: 'Bob', relationType: 'knows' },
      ];

      await manager.createRelations(relations);
      const newRelations = await manager.createRelations(relations);

      expect(newRelations).toHaveLength(0);

      const graph = await manager.readGraph();
      expect(graph.relations).toHaveLength(1);
    });

    it('should handle empty relation arrays', async () => {
      const newRelations = await manager.createRelations([]);
      expect(newRelations).toHaveLength(0);
    });
  });

  describe('addObservations', () => {
    it('should add observations to existing entities', async () => {
      await manager.createEntities([
        { name: 'Alice', entityType: 'person', observations: ['works at Acme Corp'] },
      ]);

      const results = await manager.addObservations([
        { entityName: 'Alice', contents: ['likes coffee', 'has a dog'] },
      ]);

      expect(results).toHaveLength(1);
      expect(results[0].entityName).toBe('Alice');
      expect(results[0].addedObservations).toHaveLength(2);

      const graph = await manager.readGraph();
      const alice = graph.entities.find(e => e.name === 'Alice');
      expect(alice?.observations).toHaveLength(3);
    });

    it('should not add duplicate observations', async () => {
      await manager.createEntities([
        { name: 'Alice', entityType: 'person', observations: ['works at Acme Corp'] },
      ]);

      await manager.addObservations([
        { entityName: 'Alice', contents: ['likes coffee'] },
      ]);

      const results = await manager.addObservations([
        { entityName: 'Alice', contents: ['likes coffee', 'has a dog'] },
      ]);

      expect(results[0].addedObservations).toHaveLength(1);
      expect(results[0].addedObservations).toContain('has a dog');

      const graph = await manager.readGraph();
      const alice = graph.entities.find(e => e.name === 'Alice');
      expect(alice?.observations).toHaveLength(3);
    });

    it('should throw error for non-existent entity', async () => {
      await expect(
        manager.addObservations([
          { entityName: 'NonExistent', contents: ['some observation'] },
        ])
      ).rejects.toThrow('Entity with name NonExistent not found');
    });
  });

  describe('deleteEntities', () => {
    it('should delete entities', async () => {
      await manager.createEntities([
        { name: 'Alice', entityType: 'person', observations: [] },
        { name: 'Bob', entityType: 'person', observations: [] },
      ]);

      await manager.deleteEntities(['Alice']);

      const graph = await manager.readGraph();
      expect(graph.entities).toHaveLength(1);
      expect(graph.entities[0].name).toBe('Bob');
    });

    it('should cascade delete relations when deleting entities', async () => {
      await manager.createEntities([
        { name: 'Alice', entityType: 'person', observations: [] },
        { name: 'Bob', entityType: 'person', observations: [] },
        { name: 'Charlie', entityType: 'person', observations: [] },
      ]);

      await manager.createRelations([
        { from: 'Alice', to: 'Bob', relationType: 'knows' },
        { from: 'Bob', to: 'Charlie', relationType: 'knows' },
      ]);

      await manager.deleteEntities(['Bob']);

      const graph = await manager.readGraph();
      expect(graph.entities).toHaveLength(2);
      expect(graph.relations).toHaveLength(0);
    });

    it('should handle deleting non-existent entities', async () => {
      await manager.deleteEntities(['NonExistent']);
      const graph = await manager.readGraph();
      expect(graph.entities).toHaveLength(0);
    });
  });

  describe('deleteObservations', () => {
    it('should delete observations from entities', async () => {
      await manager.createEntities([
        { name: 'Alice', entityType: 'person', observations: ['works at Acme Corp', 'likes coffee'] },
      ]);

      await manager.deleteObservations([
        { entityName: 'Alice', observations: ['likes coffee'] },
      ]);

      const graph = await manager.readGraph();
      const alice = graph.entities.find(e => e.name === 'Alice');
      expect(alice?.observations).toHaveLength(1);
      expect(alice?.observations).toContain('works at Acme Corp');
    });

    it('should handle deleting from non-existent entities', async () => {
      await manager.deleteObservations([
        { entityName: 'NonExistent', observations: ['some observation'] },
      ]);
      // Should not throw error
      const graph = await manager.readGraph();
      expect(graph.entities).toHaveLength(0);
    });
  });

  describe('deleteRelations', () => {
    it('should delete specific relations', async () => {
      await manager.createEntities([
        { name: 'Alice', entityType: 'person', observations: [] },
        { name: 'Bob', entityType: 'person', observations: [] },
      ]);

      await manager.createRelations([
        { from: 'Alice', to: 'Bob', relationType: 'knows' },
        { from: 'Alice', to: 'Bob', relationType: 'works_with' },
      ]);

      await manager.deleteRelations([
        { from: 'Alice', to: 'Bob', relationType: 'knows' },
      ]);

      const graph = await manager.readGraph();
      expect(graph.relations).toHaveLength(1);
      expect(graph.relations[0].relationType).toBe('works_with');
    });
  });

  describe('readGraph', () => {
    it('should return empty graph when file does not exist', async () => {
      const graph = await manager.readGraph();
      expect(graph.entities).toHaveLength(0);
      expect(graph.relations).toHaveLength(0);
    });

    it('should return complete graph with entities and relations', async () => {
      await manager.createEntities([
        { name: 'Alice', entityType: 'person', observations: ['works at Acme Corp'] },
      ]);

      await manager.createRelations([
        { from: 'Alice', to: 'Alice', relationType: 'self' },
      ]);

      const graph = await manager.readGraph();
      expect(graph.entities).toHaveLength(1);
      expect(graph.relations).toHaveLength(1);
    });
  });

  describe('searchNodes', () => {
    beforeEach(async () => {
      await manager.createEntities([
        { name: 'Alice', entityType: 'person', observations: ['works at Acme Corp', 'likes programming'] },
        { name: 'Bob', entityType: 'person', observations: ['works at TechCo'] },
        { name: 'Acme Corp', entityType: 'company', observations: ['tech company'] },
      ]);

      await manager.createRelations([
        { from: 'Alice', to: 'Acme Corp', relationType: 'works_at' },
        { from: 'Bob', to: 'Acme Corp', relationType: 'competitor' },
      ]);
    });

    it('should search by entity name', async () => {
      const result = await manager.searchNodes('Alice');
      expect(result.entities).toHaveLength(1);
      expect(result.entities[0].name).toBe('Alice');
    });

    it('should search by entity type', async () => {
      const result = await manager.searchNodes('company');
      expect(result.entities).toHaveLength(1);
      expect(result.entities[0].name).toBe('Acme Corp');
    });

    it('should search by observation content', async () => {
      const result = await manager.searchNodes('programming');
      expect(result.entities).toHaveLength(1);
      expect(result.entities[0].name).toBe('Alice');
    });

    it('should be case insensitive', async () => {
      const result = await manager.searchNodes('ALICE');
      expect(result.entities).toHaveLength(1);
      expect(result.entities[0].name).toBe('Alice');
    });

    it('should include relations between matched entities', async () => {
      const result = await manager.searchNodes('Acme');
      expect(result.entities).toHaveLength(2); // Alice and Acme Corp
      expect(result.relations).toHaveLength(1); // Only Alice -> Acme Corp relation
    });

    it('should return empty graph for no matches', async () => {
      const result = await manager.searchNodes('NonExistent');
      expect(result.entities).toHaveLength(0);
      expect(result.relations).toHaveLength(0);
    });
  });

  describe('openNodes', () => {
    beforeEach(async () => {
      await manager.createEntities([
        { name: 'Alice', entityType: 'person', observations: [] },
        { name: 'Bob', entityType: 'person', observations: [] },
        { name: 'Charlie', entityType: 'person', observations: [] },
      ]);

      await manager.createRelations([
        { from: 'Alice', to: 'Bob', relationType: 'knows' },
        { from: 'Bob', to: 'Charlie', relationType: 'knows' },
      ]);
    });

    it('should open specific nodes by name', async () => {
      const result = await manager.openNodes(['Alice', 'Bob']);
      expect(result.entities).toHaveLength(2);
      expect(result.entities.map(e => e.name)).toContain('Alice');
      expect(result.entities.map(e => e.name)).toContain('Bob');
    });

    it('should include relations between opened nodes', async () => {
      const result = await manager.openNodes(['Alice', 'Bob']);
      expect(result.relations).toHaveLength(1);
      expect(result.relations[0].from).toBe('Alice');
      expect(result.relations[0].to).toBe('Bob');
    });

    it('should exclude relations to unopened nodes', async () => {
      const result = await manager.openNodes(['Bob']);
      expect(result.relations).toHaveLength(0);
    });

    it('should handle opening non-existent nodes', async () => {
      const result = await manager.openNodes(['NonExistent']);
      expect(result.entities).toHaveLength(0);
    });

    it('should handle empty node list', async () => {
      const result = await manager.openNodes([]);
      expect(result.entities).toHaveLength(0);
      expect(result.relations).toHaveLength(0);
    });
  });

  describe('file persistence', () => {
    it('should persist data across manager instances', async () => {
      await manager.createEntities([
        { name: 'Alice', entityType: 'person', observations: ['persistent data'] },
      ]);

      // Create new manager instance with same file path
      const manager2 = new KnowledgeGraphManager(testFilePath);
      const graph = await manager2.readGraph();

      expect(graph.entities).toHaveLength(1);
      expect(graph.entities[0].name).toBe('Alice');
    });

    it('should handle JSONL format correctly', async () => {
      await manager.createEntities([
        { name: 'Alice', entityType: 'person', observations: [] },
      ]);
      await manager.createRelations([
        { from: 'Alice', to: 'Alice', relationType: 'self' },
      ]);

      // Read file directly
      const fileContent = await fs.readFile(testFilePath, 'utf-8');
      const lines = fileContent.split('\n').filter(line => line.trim());

      expect(lines).toHaveLength(2);
      expect(JSON.parse(lines[0])).toHaveProperty('type', 'entity');
      expect(JSON.parse(lines[1])).toHaveProperty('type', 'relation');
    });

    it('should strip type field from entities when loading from file', async () => {
      // Create entities and relations (these get saved with type field)
      await manager.createEntities([
        { name: 'Alice', entityType: 'person', observations: ['test observation'] },
        { name: 'Bob', entityType: 'person', observations: [] },
      ]);
      await manager.createRelations([
        { from: 'Alice', to: 'Bob', relationType: 'knows' },
      ]);

      // Verify file contains type field (order may vary)
      const fileContent = await fs.readFile(testFilePath, 'utf-8');
      const fileLines = fileContent.split('\n').filter(line => line.trim());
      const fileItems = fileLines.map(line => JSON.parse(line));
      const fileEntity = fileItems.find(item => item.type === 'entity');
      const fileRelation = fileItems.find(item => item.type === 'relation');
      expect(fileEntity).toBeDefined();
      expect(fileEntity).toHaveProperty('type', 'entity');
      expect(fileRelation).toBeDefined();
      expect(fileRelation).toHaveProperty('type', 'relation');

      // Create new manager instance to force reload from file
      const manager2 = new KnowledgeGraphManager(testFilePath);
      const graph = await manager2.readGraph();

      // Verify loaded entities don't have type field
      expect(graph.entities).toHaveLength(2);
      graph.entities.forEach(entity => {
        expect(entity).not.toHaveProperty('type');
        expect(entity).toHaveProperty('name');
        expect(entity).toHaveProperty('entityType');
        expect(entity).toHaveProperty('observations');
      });

      // Verify loaded relations don't have type field
      expect(graph.relations).toHaveLength(1);
      graph.relations.forEach(relation => {
        expect(relation).not.toHaveProperty('type');
        expect(relation).toHaveProperty('from');
        expect(relation).toHaveProperty('to');
        expect(relation).toHaveProperty('relationType');
      });
    });

    it('should strip type field from searchNodes results', async () => {
      await manager.createEntities([
        { name: 'Alice', entityType: 'person', observations: ['works at Acme'] },
      ]);
      await manager.createRelations([
        { from: 'Alice', to: 'Alice', relationType: 'self' },
      ]);

      // Create new manager instance to force reload from file
      const manager2 = new KnowledgeGraphManager(testFilePath);
      const result = await manager2.searchNodes('Alice');

      // Verify search results don't have type field
      expect(result.entities).toHaveLength(1);
      expect(result.entities[0]).not.toHaveProperty('type');
      expect(result.entities[0].name).toBe('Alice');

      expect(result.relations).toHaveLength(1);
      expect(result.relations[0]).not.toHaveProperty('type');
      expect(result.relations[0].from).toBe('Alice');
    });

    it('should strip type field from openNodes results', async () => {
      await manager.createEntities([
        { name: 'Alice', entityType: 'person', observations: [] },
        { name: 'Bob', entityType: 'person', observations: [] },
      ]);
      await manager.createRelations([
        { from: 'Alice', to: 'Bob', relationType: 'knows' },
      ]);

      // Create new manager instance to force reload from file
      const manager2 = new KnowledgeGraphManager(testFilePath);
      const result = await manager2.openNodes(['Alice', 'Bob']);

      // Verify open results don't have type field
      expect(result.entities).toHaveLength(2);
      result.entities.forEach(entity => {
        expect(entity).not.toHaveProperty('type');
      });

      expect(result.relations).toHaveLength(1);
      expect(result.relations[0]).not.toHaveProperty('type');
    });
  });
});

547 mcpServer/modules/memory/index.ts (new file)
@@ -0,0 +1,547 @@

#!/usr/bin/env node

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { SSEServerTransport } from "@modelcontextprotocol/sdk/server/sse.js";
import { z } from "zod";
import { promises as fs } from 'fs';
import path from 'path';
import { fileURLToPath } from 'url';
import * as http from "http";

// Define memory file path using environment variable with fallback
export const defaultMemoryPath = path.join(path.dirname(fileURLToPath(import.meta.url)), 'memory.jsonl');

// Handle backward compatibility: migrate memory.json to memory.jsonl if needed
export async function ensureMemoryFilePath(): Promise<string> {
  if (process.env.MEMORY_FILE_PATH) {
    // Custom path provided, use it as-is (with absolute path resolution)
    return path.isAbsolute(process.env.MEMORY_FILE_PATH)
      ? process.env.MEMORY_FILE_PATH
      : path.join(path.dirname(fileURLToPath(import.meta.url)), process.env.MEMORY_FILE_PATH);
  }

  // No custom path set, check for backward compatibility migration
  const oldMemoryPath = path.join(path.dirname(fileURLToPath(import.meta.url)), 'memory.json');
  const newMemoryPath = defaultMemoryPath;

  try {
    // Check if old file exists and new file doesn't
    await fs.access(oldMemoryPath);
    try {
      await fs.access(newMemoryPath);
      // Both files exist, use new one (no migration needed)
      return newMemoryPath;
    } catch {
      // Old file exists, new file doesn't - migrate
      console.error('DETECTED: Found legacy memory.json file, migrating to memory.jsonl for JSONL format compatibility');
      await fs.rename(oldMemoryPath, newMemoryPath);
      console.error('COMPLETED: Successfully migrated memory.json to memory.jsonl');
      return newMemoryPath;
    }
  } catch {
    // Old file doesn't exist, use new path
    return newMemoryPath;
  }
}

// Initialize memory file path (will be set during startup)
let MEMORY_FILE_PATH: string;

// We are storing our memory using entities, relations, and observations in a graph structure
export interface Entity {
  name: string;
  entityType: string;
  observations: string[];
}

export interface Relation {
  from: string;
  to: string;
  relationType: string;
}

export interface KnowledgeGraph {
  entities: Entity[];
  relations: Relation[];
}

// The KnowledgeGraphManager class contains all operations to interact with the knowledge graph
export class KnowledgeGraphManager {
  constructor(private memoryFilePath: string) {}

  private async loadGraph(): Promise<KnowledgeGraph> {
    try {
      const data = await fs.readFile(this.memoryFilePath, "utf-8");
      const lines = data.split("\n").filter(line => line.trim() !== "");
      return lines.reduce((graph: KnowledgeGraph, line) => {
        const item = JSON.parse(line);
        if (item.type === "entity") {
          graph.entities.push({
            name: item.name,
            entityType: item.entityType,
            observations: item.observations
          });
        }
        if (item.type === "relation") {
          graph.relations.push({
            from: item.from,
            to: item.to,
            relationType: item.relationType
          });
        }
        return graph;
      }, { entities: [], relations: [] });
    } catch (error) {
      if (error instanceof Error && 'code' in error && (error as any).code === "ENOENT") {
        return { entities: [], relations: [] };
      }
      throw error;
    }
  }

  private async saveGraph(graph: KnowledgeGraph): Promise<void> {
    const lines = [
      ...graph.entities.map(e => JSON.stringify({
        type: "entity",
        name: e.name,
        entityType: e.entityType,
        observations: e.observations
      })),
      ...graph.relations.map(r => JSON.stringify({
        type: "relation",
        from: r.from,
        to: r.to,
        relationType: r.relationType
      })),
    ];
    await fs.writeFile(this.memoryFilePath, lines.join("\n"));
  }

  async createEntities(entities: Entity[]): Promise<Entity[]> {
    const graph = await this.loadGraph();
    const newEntities = entities.filter(e => !graph.entities.some(existingEntity => existingEntity.name === e.name));
    graph.entities.push(...newEntities);
    await this.saveGraph(graph);
    return newEntities;
  }

  async createRelations(relations: Relation[]): Promise<Relation[]> {
    const graph = await this.loadGraph();
    const newRelations = relations.filter(r => !graph.relations.some(existingRelation =>
      existingRelation.from === r.from &&
      existingRelation.to === r.to &&
      existingRelation.relationType === r.relationType
    ));
    graph.relations.push(...newRelations);
    await this.saveGraph(graph);
    return newRelations;
  }

  async addObservations(observations: { entityName: string; contents: string[] }[]): Promise<{ entityName: string; addedObservations: string[] }[]> {
    const graph = await this.loadGraph();
    const results = observations.map(o => {
      const entity = graph.entities.find(e => e.name === o.entityName);
      if (!entity) {
        throw new Error(`Entity with name ${o.entityName} not found`);
      }
      const newObservations = o.contents.filter(content => !entity.observations.includes(content));
      entity.observations.push(...newObservations);
      return { entityName: o.entityName, addedObservations: newObservations };
    });
    await this.saveGraph(graph);
    return results;
  }

  async deleteEntities(entityNames: string[]): Promise<void> {
    const graph = await this.loadGraph();
    graph.entities = graph.entities.filter(e => !entityNames.includes(e.name));
    graph.relations = graph.relations.filter(r => !entityNames.includes(r.from) && !entityNames.includes(r.to));
    await this.saveGraph(graph);
  }

  async deleteObservations(deletions: { entityName: string; observations: string[] }[]): Promise<void> {
    const graph = await this.loadGraph();
    deletions.forEach(d => {
      const entity = graph.entities.find(e => e.name === d.entityName);
      if (entity) {
        entity.observations = entity.observations.filter(o => !d.observations.includes(o));
      }
    });
    await this.saveGraph(graph);
  }

  async deleteRelations(relations: Relation[]): Promise<void> {
    const graph = await this.loadGraph();
    graph.relations = graph.relations.filter(r => !relations.some(delRelation =>
      r.from === delRelation.from &&
      r.to === delRelation.to &&
      r.relationType === delRelation.relationType
    ));
    await this.saveGraph(graph);
  }

  async readGraph(): Promise<KnowledgeGraph> {
    return this.loadGraph();
  }

  // Very basic search function
  async searchNodes(query: string): Promise<KnowledgeGraph> {
    const graph = await this.loadGraph();

    // Filter entities
    const filteredEntities = graph.entities.filter(e =>
      e.name.toLowerCase().includes(query.toLowerCase()) ||
      e.entityType.toLowerCase().includes(query.toLowerCase()) ||
      e.observations.some(o => o.toLowerCase().includes(query.toLowerCase()))
    );

    // Create a Set of filtered entity names for quick lookup
    const filteredEntityNames = new Set(filteredEntities.map(e => e.name));

    // Filter relations to only include those between filtered entities
    const filteredRelations = graph.relations.filter(r =>
      filteredEntityNames.has(r.from) && filteredEntityNames.has(r.to)
    );

    const filteredGraph: KnowledgeGraph = {
      entities: filteredEntities,
      relations: filteredRelations,
    };

    return filteredGraph;
  }

  async openNodes(names: string[]): Promise<KnowledgeGraph> {
    const graph = await this.loadGraph();

    // Filter entities
    const filteredEntities = graph.entities.filter(e => names.includes(e.name));

    // Create a Set of filtered entity names for quick lookup
    const filteredEntityNames = new Set(filteredEntities.map(e => e.name));

    // Filter relations to only include those between filtered entities
    const filteredRelations = graph.relations.filter(r =>
      filteredEntityNames.has(r.from) && filteredEntityNames.has(r.to)
    );

    const filteredGraph: KnowledgeGraph = {
      entities: filteredEntities,
      relations: filteredRelations,
    };

    return filteredGraph;
  }
}

let knowledgeGraphManager: KnowledgeGraphManager;

// Zod schemas for entities and relations
const EntitySchema = z.object({
  name: z.string().describe("The name of the entity"),
  entityType: z.string().describe("The type of the entity"),
  observations: z.array(z.string()).describe("An array of observation contents associated with the entity")
});

const RelationSchema = z.object({
  from: z.string().describe("The name of the entity where the relation starts"),
  to: z.string().describe("The name of the entity where the relation ends"),
  relationType: z.string().describe("The type of the relation")
});

// The server instance and tools exposed to Claude
const server = new McpServer({
  name: "memory-server",
  version: "0.6.3",
});

// Register create_entities tool
server.registerTool(
  "create_entities",
  {
    title: "Create Entities",
    description: "Create multiple new entities in the knowledge graph",
    inputSchema: {
      entities: z.array(EntitySchema)
    },
    outputSchema: {
      entities: z.array(EntitySchema)
    }
  },
  async ({ entities }) => {
    const result = await knowledgeGraphManager.createEntities(entities);
    return {
      content: [{ type: "text" as const, text: JSON.stringify(result, null, 2) }],
      structuredContent: { entities: result }
    };
  }
);

// Register create_relations tool
server.registerTool(
  "create_relations",
  {
    title: "Create Relations",
    description: "Create multiple new relations between entities in the knowledge graph. Relations should be in active voice",
    inputSchema: {
      relations: z.array(RelationSchema)
    },
    outputSchema: {
      relations: z.array(RelationSchema)
    }
  },
  async ({ relations }) => {
    const result = await knowledgeGraphManager.createRelations(relations);
    return {
      content: [{ type: "text" as const, text: JSON.stringify(result, null, 2) }],
      structuredContent: { relations: result }
    };
  }
);

// Register add_observations tool
server.registerTool(
  "add_observations",
  {
    title: "Add Observations",
    description: "Add new observations to existing entities in the knowledge graph",
    inputSchema: {
      observations: z.array(z.object({
        entityName: z.string().describe("The name of the entity to add the observations to"),
        contents: z.array(z.string()).describe("An array of observation contents to add")
      }))
    },
    outputSchema: {
      results: z.array(z.object({
        entityName: z.string(),
        addedObservations: z.array(z.string())
      }))
    }
  },
  async ({ observations }) => {
    const result = await knowledgeGraphManager.addObservations(observations);
    return {
      content: [{ type: "text" as const, text: JSON.stringify(result, null, 2) }],
      structuredContent: { results: result }
    };
  }
);

// Register delete_entities tool
server.registerTool(
  "delete_entities",
  {
    title: "Delete Entities",
    description: "Delete multiple entities and their associated relations from the knowledge graph",
    inputSchema: {
      entityNames: z.array(z.string()).describe("An array of entity names to delete")
    },
    outputSchema: {
      success: z.boolean(),
      message: z.string()
    }
  },
  async ({ entityNames }) => {
    await knowledgeGraphManager.deleteEntities(entityNames);
    return {
      content: [{ type: "text" as const, text: "Entities deleted successfully" }],
      structuredContent: { success: true, message: "Entities deleted successfully" }
    };
  }
);

// Register delete_observations tool
server.registerTool(
  "delete_observations",
  {
    title: "Delete Observations",
    description: "Delete specific observations from entities in the knowledge graph",
    inputSchema: {
      deletions: z.array(z.object({
        entityName: z.string().describe("The name of the entity containing the observations"),
        observations: z.array(z.string()).describe("An array of observations to delete")
      }))
    },
    outputSchema: {
      success: z.boolean(),
      message: z.string()
    }
  },
  async ({ deletions }) => {
    await knowledgeGraphManager.deleteObservations(deletions);
    return {
      content: [{ type: "text" as const, text: "Observations deleted successfully" }],
      structuredContent: { success: true, message: "Observations deleted successfully" }
    };
  }
);

// Register delete_relations tool
server.registerTool(
  "delete_relations",
  {
    title: "Delete Relations",
    description: "Delete multiple relations from the knowledge graph",
    inputSchema: {
      relations: z.array(RelationSchema).describe("An array of relations to delete")
    },
    outputSchema: {
      success: z.boolean(),
      message: z.string()
    }
  },
  async ({ relations }) => {
    await knowledgeGraphManager.deleteRelations(relations);
    return {
      content: [{ type: "text" as const, text: "Relations deleted successfully" }],
      structuredContent: { success: true, message: "Relations deleted successfully" }
    };
  }
);

// Register read_graph tool
server.registerTool(
  "read_graph",
  {
    title: "Read Graph",
    description: "Read the entire knowledge graph",
    inputSchema: {},
    outputSchema: {
      entities: z.array(EntitySchema),
      relations: z.array(RelationSchema)
    }
  },
  async () => {
    const graph = await knowledgeGraphManager.readGraph();
    return {
      content: [{ type: "text" as const, text: JSON.stringify(graph, null, 2) }],
      structuredContent: { ...graph }
    };
  }
);

// Register search_nodes tool
server.registerTool(
  "search_nodes",
  {
    title: "Search Nodes",
    description: "Search for nodes in the knowledge graph based on a query",
    inputSchema: {
      query: z.string().describe("The search query to match against entity names, types, and observation content")
    },
    outputSchema: {
      entities: z.array(EntitySchema),
      relations: z.array(RelationSchema)
    }
  },
  async ({ query }) => {
    const graph = await knowledgeGraphManager.searchNodes(query);
    return {
      content: [{ type: "text" as const, text: JSON.stringify(graph, null, 2) }],
      structuredContent: { ...graph }
    };
  }
);

// Register open_nodes tool
server.registerTool(
  "open_nodes",
  {
    title: "Open Nodes",
    description: "Open specific nodes in the knowledge graph by their names",
    inputSchema: {
      names: z.array(z.string()).describe("An array of entity names to retrieve")
    },
    outputSchema: {
      entities: z.array(EntitySchema),
      relations: z.array(RelationSchema)
    }
  },
  async ({ names }) => {
    const graph = await knowledgeGraphManager.openNodes(names);
    return {
      content: [{ type: "text" as const, text: JSON.stringify(graph, null, 2) }],
      structuredContent: { ...graph }
    };
  }
);

// SSE transport session routing (sessionId -> transport)
const sseTransportsBySessionId = new Map<string, SSEServerTransport>();

function runServer() {
  const port = Number(process.env.MCP_PORT ?? process.env.SSE_PORT ?? 3000);

  const httpServer = http.createServer(async (req, res) => {
    const url = new URL(req.url ?? "/", `http://${req.headers.host ?? "localhost"}`);
    const pathname = url.pathname;

    if (req.method === "GET" && (pathname === "/sse" || pathname === "/")) {
      try {
        const transport = new SSEServerTransport("/messages", res);
        sseTransportsBySessionId.set(transport.sessionId, transport);
        transport.onclose = () => {
          sseTransportsBySessionId.delete(transport.sessionId);
        };
        const heartbeatInterval = setInterval(() => {
          try {
            if (!res.writableEnded) {
              res.write(': heartbeat\n\n');
            } else {
              clearInterval(heartbeatInterval);
            }
          } catch {
            clearInterval(heartbeatInterval);
          }
        }, 15_000);
        res.on('close', () => clearInterval(heartbeatInterval));
        await server.connect(transport);
        console.error("Knowledge Graph MCP Server: new SSE client connected");
      } catch (error) {
        console.error("SSE connection error:", error);
        if (!res.headersSent) {
          res.writeHead(500).end("Internal server error");
        }
      }
      return;
    }

    if (req.method === "POST" && pathname === "/messages") {
      const sessionId = url.searchParams.get("sessionId");
      if (!sessionId) {
        res.writeHead(400).end("Missing sessionId query parameter");
        return;
      }
      const transport = sseTransportsBySessionId.get(sessionId);
      if (!transport) {
        res.writeHead(404).end("Unknown session");
        return;
      }
      await transport.handlePostMessage(req, res);
      return;
    }

    res.writeHead(404).end("Not found");
  });

  httpServer.listen(port, () => {
    console.error(`Knowledge Graph MCP Server running on SSE at http://localhost:${port}`);
    console.error("  GET  /sse – open SSE stream (then POST to /messages?sessionId=...)");
    console.error("  POST /messages?sessionId=<id> – send MCP messages");
  });
}

async function main() {
  // Initialize memory file path with backward compatibility
  MEMORY_FILE_PATH = await ensureMemoryFilePath();

  // Initialize knowledge graph manager with the memory file path
  knowledgeGraphManager = new KnowledgeGraphManager(MEMORY_FILE_PATH);

  runServer();
}

main().catch((error) => {
  console.error("Fatal error in main():", error);
  process.exit(1);
});
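
As a quick smoke test of the SSE transport above (the port and routes match the defaults in `runServer`; the session id placeholder must be replaced with the value the server issues on the stream):

```sh
# Open the SSE stream; the server announces a messages endpoint that includes a sessionId
curl -N http://localhost:3000/sse

# In a second shell, post MCP messages to that endpoint
curl -X POST 'http://localhost:3000/messages?sessionId=<id>' \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":1,"method":"ping"}'
```
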

37 mcpServer/modules/memory/package.json (new file)
@@ -0,0 +1,37 @@

{
  "name": "@modelcontextprotocol/server-memory",
  "version": "0.6.3",
  "description": "MCP server for enabling memory for Claude through a knowledge graph",
  "license": "SEE LICENSE IN LICENSE",
  "mcpName": "io.github.modelcontextprotocol/server-memory",
  "author": "Model Context Protocol a Series of LF Projects, LLC.",
  "homepage": "https://modelcontextprotocol.io",
  "bugs": "https://github.com/modelcontextprotocol/servers/issues",
  "repository": {
    "type": "git",
    "url": "https://github.com/modelcontextprotocol/servers.git"
  },
  "type": "module",
  "bin": {
    "mcp-server-memory": "dist/index.js"
  },
  "files": [
    "dist"
  ],
  "scripts": {
    "build": "tsc && shx chmod +x dist/*.js",
    "prepare": "npm run build",
    "watch": "tsc --watch",
    "test": "vitest run --coverage"
  },
  "dependencies": {
    "@modelcontextprotocol/sdk": "^1.26.0",
    "zod": "^3.23.8"
  },
  "devDependencies": {
    "@types/node": "^22",
    "@vitest/coverage-v8": "^2.1.8",
    "shx": "^0.3.4",
    "typescript": "^5.6.2",
    "vitest": "^2.1.8"
  }
}

11  mcpServer/modules/memory/tsconfig.json  Normal file
@@ -0,0 +1,11 @@

{
  "compilerOptions": {
    "outDir": "./dist",
    "rootDir": ".",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "esModuleInterop": true
  },
  "include": ["./**/*.ts"],
  "exclude": ["**/*.test.ts", "vitest.config.ts"]
}

14  mcpServer/modules/memory/vitest.config.ts  Normal file
@@ -0,0 +1,14 @@

import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    globals: true,
    environment: 'node',
    include: ['**/__tests__/**/*.test.ts'],
    coverage: {
      provider: 'v8',
      include: ['**/*.ts'],
      exclude: ['**/__tests__/**', '**/dist/**'],
    },
  },
});

24  mcpServer/modules/sequentialthinking/Dockerfile  Normal file
@@ -0,0 +1,24 @@

FROM node:22.12-alpine AS builder

COPY . /app
COPY tsconfig.json /tsconfig.json

WORKDIR /app

# "prepare" runs the build script, so dist/ is produced here
RUN npm install

FROM node:22-alpine AS release

COPY --from=builder /app/dist /app/dist
COPY --from=builder /app/package.json /app/package.json
COPY --from=builder /app/package-lock.json /app/package-lock.json

ENV NODE_ENV=production

WORKDIR /app

# Production-only install; scripts are skipped since dist/ is prebuilt
RUN npm ci --ignore-scripts --omit-dev

ENTRYPOINT ["node", "dist/index.js"]
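
A build-and-run sketch for this image (the tag and port mapping are illustrative; the module directory is assumed as build context so that `COPY . /app` and `COPY tsconfig.json` resolve):

```bash
docker build -t mcp/sequentialthinking mcpServer/modules/sequentialthinking
# index.ts below serves SSE on MCP_PORT (default 3000)
docker run --rm -p 3000:3000 mcp/sequentialthinking
```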

155  mcpServer/modules/sequentialthinking/README.md  Normal file
@@ -0,0 +1,155 @@

# Sequential Thinking MCP Server

An MCP server implementation that provides a tool for dynamic and reflective problem-solving through a structured thinking process.

## Features

- Break down complex problems into manageable steps
- Revise and refine thoughts as understanding deepens
- Branch into alternative paths of reasoning
- Adjust the total number of thoughts dynamically
- Generate and verify solution hypotheses

## Tool

### sequentialthinking

Facilitates a detailed, step-by-step thinking process for problem-solving and analysis.

**Inputs:**
- `thought` (string): The current thinking step
- `nextThoughtNeeded` (boolean): Whether another thought step is needed
- `thoughtNumber` (integer): Current thought number
- `totalThoughts` (integer): Estimated total thoughts needed
- `isRevision` (boolean, optional): Whether this revises previous thinking
- `revisesThought` (integer, optional): Which thought is being reconsidered
- `branchFromThought` (integer, optional): Branching point thought number
- `branchId` (string, optional): Branch identifier
- `needsMoreThoughts` (boolean, optional): If more thoughts are needed
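
As an illustrative sketch (not from the upstream README), a minimal invocation supplying only the required inputs could look like:

```json
{
  "thought": "First, restate the problem in my own words.",
  "nextThoughtNeeded": true,
  "thoughtNumber": 1,
  "totalThoughts": 3
}
```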

## Usage

The Sequential Thinking tool is designed for:
- Breaking down complex problems into steps
- Planning and design with room for revision
- Analysis that might need course correction
- Problems where the full scope might not be clear initially
- Tasks that need to maintain context over multiple steps
- Situations where irrelevant information needs to be filtered out

## Configuration

### Usage with Claude Desktop

Add this to your `claude_desktop_config.json`:

#### npx

```json
{
  "mcpServers": {
    "sequential-thinking": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-sequential-thinking"
      ]
    }
  }
}
```

#### docker

```json
{
  "mcpServers": {
    "sequentialthinking": {
      "command": "docker",
      "args": [
        "run",
        "--rm",
        "-i",
        "mcp/sequentialthinking"
      ]
    }
  }
}
```

To disable logging of thought information, set the environment variable `DISABLE_THOUGHT_LOGGING` to `true`.
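
For example, when launching the server via `npx` (an illustrative sketch):

```bash
DISABLE_THOUGHT_LOGGING=true npx -y @modelcontextprotocol/server-sequential-thinking
```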

### Usage with VS Code

For quick installation, use one of the installation links below:

[Install in VS Code (npx)](https://insiders.vscode.dev/redirect/mcp/install?name=sequentialthinking&config=%7B%22command%22%3A%22npx%22%2C%22args%22%3A%5B%22-y%22%2C%22%40modelcontextprotocol%2Fserver-sequential-thinking%22%5D%7D) [Install in VS Code Insiders (npx)](https://insiders.vscode.dev/redirect/mcp/install?name=sequentialthinking&config=%7B%22command%22%3A%22npx%22%2C%22args%22%3A%5B%22-y%22%2C%22%40modelcontextprotocol%2Fserver-sequential-thinking%22%5D%7D&quality=insiders)

[Install in VS Code (docker)](https://insiders.vscode.dev/redirect/mcp/install?name=sequentialthinking&config=%7B%22command%22%3A%22docker%22%2C%22args%22%3A%5B%22run%22%2C%22--rm%22%2C%22-i%22%2C%22mcp%2Fsequentialthinking%22%5D%7D) [Install in VS Code Insiders (docker)](https://insiders.vscode.dev/redirect/mcp/install?name=sequentialthinking&config=%7B%22command%22%3A%22docker%22%2C%22args%22%3A%5B%22run%22%2C%22--rm%22%2C%22-i%22%2C%22mcp%2Fsequentialthinking%22%5D%7D&quality=insiders)

For manual installation, you can configure the MCP server using one of these methods:

**Method 1: User Configuration (Recommended)**
Add the configuration to your user-level MCP configuration file. Open the Command Palette (`Ctrl + Shift + P`) and run `MCP: Open User Configuration`. This will open your user `mcp.json` file, where you can add the server configuration.

**Method 2: Workspace Configuration**
Alternatively, you can add the configuration to a file called `.vscode/mcp.json` in your workspace. This lets you share the configuration with others.

> For more details about MCP configuration in VS Code, see the [official VS Code MCP documentation](https://code.visualstudio.com/docs/copilot/customization/mcp-servers).

For NPX installation:

```json
{
  "servers": {
    "sequential-thinking": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-sequential-thinking"
      ]
    }
  }
}
```

For Docker installation:

```json
{
  "servers": {
    "sequential-thinking": {
      "command": "docker",
      "args": [
        "run",
        "--rm",
        "-i",
        "mcp/sequentialthinking"
      ]
    }
  }
}
```

### Usage with Codex CLI

Run the following:

#### npx

```bash
codex mcp add sequential-thinking npx -y @modelcontextprotocol/server-sequential-thinking
```

## Building

Docker (from the repository root, using the module directory as build context):

```bash
docker build -t mcp/sequentialthinking mcpServer/modules/sequentialthinking
```

## License

This MCP server is licensed under the MIT License. This means you are free to use, modify, and distribute the software, subject to the terms and conditions of the MIT License. For more details, see the LICENSE file in the project repository.

308  mcpServer/modules/sequentialthinking/__tests__/lib.test.ts  Normal file
@@ -0,0 +1,308 @@

import { describe, it, expect, beforeEach, afterEach, vi } from 'vitest';
import { SequentialThinkingServer, ThoughtData } from '../lib.js';

// Mock chalk to avoid ESM issues
vi.mock('chalk', () => {
  const chalkMock = {
    yellow: (str: string) => str,
    green: (str: string) => str,
    blue: (str: string) => str,
  };
  return {
    default: chalkMock,
  };
});

describe('SequentialThinkingServer', () => {
  let server: SequentialThinkingServer;

  beforeEach(() => {
    // Disable thought logging for tests
    process.env.DISABLE_THOUGHT_LOGGING = 'true';
    server = new SequentialThinkingServer();
  });

  // Note: Input validation tests removed - validation now happens at the tool
  // registration layer via Zod schemas before processThought is called

  describe('processThought - valid inputs', () => {
    it('should accept valid basic thought', () => {
      const input = {
        thought: 'This is my first thought',
        thoughtNumber: 1,
        totalThoughts: 3,
        nextThoughtNeeded: true
      };

      const result = server.processThought(input);
      expect(result.isError).toBeUndefined();

      const data = JSON.parse(result.content[0].text);
      expect(data.thoughtNumber).toBe(1);
      expect(data.totalThoughts).toBe(3);
      expect(data.nextThoughtNeeded).toBe(true);
      expect(data.thoughtHistoryLength).toBe(1);
    });

    it('should accept thought with optional fields', () => {
      const input = {
        thought: 'Revising my earlier idea',
        thoughtNumber: 2,
        totalThoughts: 3,
        nextThoughtNeeded: true,
        isRevision: true,
        revisesThought: 1,
        needsMoreThoughts: false
      };

      const result = server.processThought(input);
      expect(result.isError).toBeUndefined();

      const data = JSON.parse(result.content[0].text);
      expect(data.thoughtNumber).toBe(2);
      expect(data.thoughtHistoryLength).toBe(1);
    });

    it('should track multiple thoughts in history', () => {
      const input1 = {
        thought: 'First thought',
        thoughtNumber: 1,
        totalThoughts: 3,
        nextThoughtNeeded: true
      };

      const input2 = {
        thought: 'Second thought',
        thoughtNumber: 2,
        totalThoughts: 3,
        nextThoughtNeeded: true
      };

      const input3 = {
        thought: 'Final thought',
        thoughtNumber: 3,
        totalThoughts: 3,
        nextThoughtNeeded: false
      };

      server.processThought(input1);
      server.processThought(input2);
      const result = server.processThought(input3);

      const data = JSON.parse(result.content[0].text);
      expect(data.thoughtHistoryLength).toBe(3);
      expect(data.nextThoughtNeeded).toBe(false);
    });

    it('should auto-adjust totalThoughts if thoughtNumber exceeds it', () => {
      const input = {
        thought: 'Thought 5',
        thoughtNumber: 5,
        totalThoughts: 3,
        nextThoughtNeeded: true
      };

      const result = server.processThought(input);
      const data = JSON.parse(result.content[0].text);

      expect(data.totalThoughts).toBe(5);
    });
  });

  describe('processThought - branching', () => {
    it('should track branches correctly', () => {
      const input1 = {
        thought: 'Main thought',
        thoughtNumber: 1,
        totalThoughts: 3,
        nextThoughtNeeded: true
      };

      const input2 = {
        thought: 'Branch A thought',
        thoughtNumber: 2,
        totalThoughts: 3,
        nextThoughtNeeded: true,
        branchFromThought: 1,
        branchId: 'branch-a'
      };

      const input3 = {
        thought: 'Branch B thought',
        thoughtNumber: 2,
        totalThoughts: 3,
        nextThoughtNeeded: false,
        branchFromThought: 1,
        branchId: 'branch-b'
      };

      server.processThought(input1);
      server.processThought(input2);
      const result = server.processThought(input3);

      const data = JSON.parse(result.content[0].text);
      expect(data.branches).toContain('branch-a');
      expect(data.branches).toContain('branch-b');
      expect(data.branches.length).toBe(2);
      expect(data.thoughtHistoryLength).toBe(3);
    });

    it('should allow multiple thoughts in same branch', () => {
      const input1 = {
        thought: 'Branch thought 1',
        thoughtNumber: 1,
        totalThoughts: 2,
        nextThoughtNeeded: true,
        branchFromThought: 1,
        branchId: 'branch-a'
      };

      const input2 = {
        thought: 'Branch thought 2',
        thoughtNumber: 2,
        totalThoughts: 2,
        nextThoughtNeeded: false,
        branchFromThought: 1,
        branchId: 'branch-a'
      };

      server.processThought(input1);
      const result = server.processThought(input2);

      const data = JSON.parse(result.content[0].text);
      expect(data.branches).toContain('branch-a');
      expect(data.branches.length).toBe(1);
    });
  });

  describe('processThought - edge cases', () => {
    it('should handle very long thought strings', () => {
      const input = {
        thought: 'a'.repeat(10000),
        thoughtNumber: 1,
        totalThoughts: 1,
        nextThoughtNeeded: false
      };

      const result = server.processThought(input);
      expect(result.isError).toBeUndefined();
    });

    it('should handle thoughtNumber = 1, totalThoughts = 1', () => {
      const input = {
        thought: 'Only thought',
        thoughtNumber: 1,
        totalThoughts: 1,
        nextThoughtNeeded: false
      };

      const result = server.processThought(input);
      expect(result.isError).toBeUndefined();

      const data = JSON.parse(result.content[0].text);
      expect(data.thoughtNumber).toBe(1);
      expect(data.totalThoughts).toBe(1);
    });

    it('should handle nextThoughtNeeded = false', () => {
      const input = {
        thought: 'Final thought',
        thoughtNumber: 3,
        totalThoughts: 3,
        nextThoughtNeeded: false
      };

      const result = server.processThought(input);
      const data = JSON.parse(result.content[0].text);

      expect(data.nextThoughtNeeded).toBe(false);
    });
  });

  describe('processThought - response format', () => {
    it('should return correct response structure on success', () => {
      const input = {
        thought: 'Test thought',
        thoughtNumber: 1,
        totalThoughts: 1,
        nextThoughtNeeded: false
      };

      const result = server.processThought(input);

      expect(result).toHaveProperty('content');
      expect(Array.isArray(result.content)).toBe(true);
      expect(result.content.length).toBe(1);
      expect(result.content[0]).toHaveProperty('type', 'text');
      expect(result.content[0]).toHaveProperty('text');
    });

    it('should return valid JSON in response', () => {
      const input = {
        thought: 'Test thought',
        thoughtNumber: 1,
        totalThoughts: 1,
        nextThoughtNeeded: false
      };

      const result = server.processThought(input);

      expect(() => JSON.parse(result.content[0].text)).not.toThrow();
    });
  });

  describe('processThought - with logging enabled', () => {
    let serverWithLogging: SequentialThinkingServer;

    beforeEach(() => {
      // Enable thought logging for these tests
      delete process.env.DISABLE_THOUGHT_LOGGING;
      serverWithLogging = new SequentialThinkingServer();
    });

    afterEach(() => {
      // Reset to disabled for other tests
      process.env.DISABLE_THOUGHT_LOGGING = 'true';
    });

    it('should format and log regular thoughts', () => {
      const input = {
        thought: 'Test thought with logging',
        thoughtNumber: 1,
        totalThoughts: 3,
        nextThoughtNeeded: true
      };

      const result = serverWithLogging.processThought(input);
      expect(result.isError).toBeUndefined();
    });

    it('should format and log revision thoughts', () => {
      const input = {
        thought: 'Revised thought',
        thoughtNumber: 2,
        totalThoughts: 3,
        nextThoughtNeeded: true,
        isRevision: true,
        revisesThought: 1
      };

      const result = serverWithLogging.processThought(input);
      expect(result.isError).toBeUndefined();
    });

    it('should format and log branch thoughts', () => {
      const input = {
        thought: 'Branch thought',
        thoughtNumber: 2,
        totalThoughts: 3,
        nextThoughtNeeded: false,
        branchFromThought: 1,
        branchId: 'branch-a'
      };

      const result = serverWithLogging.processThought(input);
      expect(result.isError).toBeUndefined();
    });
  });
});
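
The suite runs through the package's `test` script (a sketch, assuming dependencies are installed in the module directory):

```bash
cd mcpServer/modules/sequentialthinking
npm install
npm test   # wraps `vitest run --coverage`
```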

174  mcpServer/modules/sequentialthinking/index.ts  Normal file
@@ -0,0 +1,174 @@

#!/usr/bin/env node

import * as http from "http";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { SSEServerTransport } from "@modelcontextprotocol/sdk/server/sse.js";
import { z } from "zod";
import { SequentialThinkingServer } from "./lib.js";

const server = new McpServer({
  name: "sequential-thinking-server",
  version: "0.2.0",
});

const thinkingServer = new SequentialThinkingServer();

server.registerTool(
  "sequentialthinking",
  {
    title: "Sequential Thinking",
    description: `A detailed tool for dynamic and reflective problem-solving through thoughts.
This tool helps analyze problems through a flexible thinking process that can adapt and evolve.
Each thought can build on, question, or revise previous insights as understanding deepens.

When to use this tool:
- Breaking down complex problems into steps
- Planning and design with room for revision
- Analysis that might need course correction
- Problems where the full scope might not be clear initially
- Problems that require a multi-step solution
- Tasks that need to maintain context over multiple steps
- Situations where irrelevant information needs to be filtered out

Key features:
- You can adjust total_thoughts up or down as you progress
- You can question or revise previous thoughts
- You can add more thoughts even after reaching what seemed like the end
- You can express uncertainty and explore alternative approaches
- Not every thought needs to build linearly - you can branch or backtrack
- Generates a solution hypothesis
- Verifies the hypothesis based on the Chain of Thought steps
- Repeats the process until satisfied
- Provides a correct answer

Parameters explained:
- thought: Your current thinking step, which can include:
* Regular analytical steps
* Revisions of previous thoughts
* Questions about previous decisions
* Realizations about needing more analysis
* Changes in approach
* Hypothesis generation
* Hypothesis verification
- nextThoughtNeeded: True if you need more thinking, even if at what seemed like the end
- thoughtNumber: Current number in sequence (can go beyond initial total if needed)
- totalThoughts: Current estimate of thoughts needed (can be adjusted up/down)
- isRevision: A boolean indicating if this thought revises previous thinking
- revisesThought: If is_revision is true, which thought number is being reconsidered
- branchFromThought: If branching, which thought number is the branching point
- branchId: Identifier for the current branch (if any)
- needsMoreThoughts: If reaching end but realizing more thoughts needed

You should:
1. Start with an initial estimate of needed thoughts, but be ready to adjust
2. Feel free to question or revise previous thoughts
3. Don't hesitate to add more thoughts if needed, even at the "end"
4. Express uncertainty when present
5. Mark thoughts that revise previous thinking or branch into new paths
6. Ignore information that is irrelevant to the current step
7. Generate a solution hypothesis when appropriate
8. Verify the hypothesis based on the Chain of Thought steps
9. Repeat the process until satisfied with the solution
10. Provide a single, ideally correct answer as the final output
11. Only set nextThoughtNeeded to false when truly done and a satisfactory answer is reached`,
    inputSchema: {
      thought: z.string().describe("Your current thinking step"),
      nextThoughtNeeded: z.boolean().describe("Whether another thought step is needed"),
      thoughtNumber: z.number().int().min(1).describe("Current thought number (numeric value, e.g., 1, 2, 3)"),
      totalThoughts: z.number().int().min(1).describe("Estimated total thoughts needed (numeric value, e.g., 5, 10)"),
      isRevision: z.boolean().optional().describe("Whether this revises previous thinking"),
      revisesThought: z.number().int().min(1).optional().describe("Which thought is being reconsidered"),
      branchFromThought: z.number().int().min(1).optional().describe("Branching point thought number"),
      branchId: z.string().optional().describe("Branch identifier"),
      needsMoreThoughts: z.boolean().optional().describe("If more thoughts are needed")
    },
    outputSchema: {
      thoughtNumber: z.number(),
      totalThoughts: z.number(),
      nextThoughtNeeded: z.boolean(),
      branches: z.array(z.string()),
      thoughtHistoryLength: z.number()
    },
  },
  async (args) => {
    const result = thinkingServer.processThought(args);

    if (result.isError) {
      return result;
    }

    // Parse the JSON response to get structured content
    const parsedContent = JSON.parse(result.content[0].text);

    return {
      content: result.content,
      structuredContent: parsedContent
    };
  }
);

const sseTransportsBySessionId = new Map<string, SSEServerTransport>();

function runServer() {
  const port = Number(process.env.MCP_PORT ?? process.env.SSE_PORT ?? 3000);

  const httpServer = http.createServer(async (req, res) => {
    const url = new URL(req.url ?? "/", `http://${req.headers.host ?? "localhost"}`);
    const pathname = url.pathname;

    if (req.method === "GET" && (pathname === "/sse" || pathname === "/")) {
      try {
        const transport = new SSEServerTransport("/messages", res);
        sseTransportsBySessionId.set(transport.sessionId, transport);
        transport.onclose = () => {
          sseTransportsBySessionId.delete(transport.sessionId);
        };
        const heartbeatInterval = setInterval(() => {
          try {
            if (!res.writableEnded) {
              res.write(': heartbeat\n\n');
            } else {
              clearInterval(heartbeatInterval);
            }
          } catch {
            clearInterval(heartbeatInterval);
          }
        }, 15_000);
        res.on('close', () => clearInterval(heartbeatInterval));
        await server.connect(transport);
        console.error("Sequential Thinking MCP Server: new SSE client connected");
      } catch (error) {
        console.error("SSE connection error:", error);
        if (!res.headersSent) {
          res.writeHead(500).end("Internal server error");
        }
      }
      return;
    }

    if (req.method === "POST" && pathname === "/messages") {
      const sessionId = url.searchParams.get("sessionId");
      if (!sessionId) {
        res.writeHead(400).end("Missing sessionId query parameter");
        return;
      }
      const transport = sseTransportsBySessionId.get(sessionId);
      if (!transport) {
        res.writeHead(404).end("Unknown session");
        return;
      }
      await transport.handlePostMessage(req, res);
      return;
    }

    res.writeHead(404).end("Not found");
  });

  httpServer.listen(port, () => {
    console.error(`Sequential Thinking MCP Server running on SSE at http://localhost:${port}`);
    console.error("  GET  /sse – open SSE stream (then POST to /messages?sessionId=...)");
    console.error("  POST /messages?sessionId=<id> – send MCP messages");
  });
}

runServer();
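
Once connected over SSE (as in the client sketch after the memory server above), a tool call could look like this; the `client` variable and the argument values are illustrative:

```typescript
const result = await client.callTool({
  name: "sequentialthinking",
  arguments: {
    thought: "Restate the challenge and list the known constraints.",
    nextThoughtNeeded: true,
    thoughtNumber: 1,
    totalThoughts: 4,
  },
});
// result.structuredContent carries { thoughtNumber, totalThoughts,
// nextThoughtNeeded, branches, thoughtHistoryLength }
```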

99  mcpServer/modules/sequentialthinking/lib.ts  Normal file
@@ -0,0 +1,99 @@

import chalk from 'chalk';

export interface ThoughtData {
  thought: string;
  thoughtNumber: number;
  totalThoughts: number;
  isRevision?: boolean;
  revisesThought?: number;
  branchFromThought?: number;
  branchId?: string;
  needsMoreThoughts?: boolean;
  nextThoughtNeeded: boolean;
}

export class SequentialThinkingServer {
  private thoughtHistory: ThoughtData[] = [];
  private branches: Record<string, ThoughtData[]> = {};
  private disableThoughtLogging: boolean;

  constructor() {
    this.disableThoughtLogging = (process.env.DISABLE_THOUGHT_LOGGING || "").toLowerCase() === "true";
  }

  private formatThought(thoughtData: ThoughtData): string {
    const { thoughtNumber, totalThoughts, thought, isRevision, revisesThought, branchFromThought, branchId } = thoughtData;

    let prefix = '';
    let context = '';

    if (isRevision) {
      prefix = chalk.yellow('🔄 Revision');
      context = ` (revising thought ${revisesThought})`;
    } else if (branchFromThought) {
      prefix = chalk.green('🌿 Branch');
      context = ` (from thought ${branchFromThought}, ID: ${branchId})`;
    } else {
      prefix = chalk.blue('💭 Thought');
      context = '';
    }

    const header = `${prefix} ${thoughtNumber}/${totalThoughts}${context}`;
    const border = '─'.repeat(Math.max(header.length, thought.length) + 4);

    return `
┌${border}┐
│ ${header} │
├${border}┤
│ ${thought.padEnd(border.length - 2)} │
└${border}┘`;
  }

  public processThought(input: ThoughtData): { content: Array<{ type: "text"; text: string }>; isError?: boolean } {
    try {
      // Validation happens at the tool registration layer via Zod
      // Adjust totalThoughts if thoughtNumber exceeds it
      if (input.thoughtNumber > input.totalThoughts) {
        input.totalThoughts = input.thoughtNumber;
      }

      this.thoughtHistory.push(input);

      if (input.branchFromThought && input.branchId) {
        if (!this.branches[input.branchId]) {
          this.branches[input.branchId] = [];
        }
        this.branches[input.branchId].push(input);
      }

      if (!this.disableThoughtLogging) {
        const formattedThought = this.formatThought(input);
        console.error(formattedThought);
      }

      return {
        content: [{
          type: "text" as const,
          text: JSON.stringify({
            thoughtNumber: input.thoughtNumber,
            totalThoughts: input.totalThoughts,
            nextThoughtNeeded: input.nextThoughtNeeded,
            branches: Object.keys(this.branches),
            thoughtHistoryLength: this.thoughtHistory.length
          }, null, 2)
        }]
      };
    } catch (error) {
      return {
        content: [{
          type: "text" as const,
          text: JSON.stringify({
            error: error instanceof Error ? error.message : String(error),
            status: 'failed'
          }, null, 2)
        }],
        isError: true
      };
    }
  }
}
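
A quick sketch (illustrative, not part of this diff) of using the class directly, outside the MCP transport; the variable names are assumptions:

```typescript
import { SequentialThinkingServer } from "./lib.js";

const thinker = new SequentialThinkingServer();
const res = thinker.processThought({
  thought: "Check whether the ciphertext length is a multiple of the block size.",
  thoughtNumber: 1,
  totalThoughts: 2,
  nextThoughtNeeded: true,
});
// res.content[0].text is a JSON summary: thoughtNumber, totalThoughts,
// nextThoughtNeeded, branches, thoughtHistoryLength
console.log(res.content[0].text);
```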

40  mcpServer/modules/sequentialthinking/package.json  Normal file
@@ -0,0 +1,40 @@

{
  "name": "@modelcontextprotocol/server-sequential-thinking",
  "version": "0.6.2",
  "description": "MCP server for sequential thinking and problem solving",
  "license": "SEE LICENSE IN LICENSE",
  "mcpName": "io.github.modelcontextprotocol/server-sequential-thinking",
  "author": "Model Context Protocol a Series of LF Projects, LLC.",
  "homepage": "https://modelcontextprotocol.io",
  "bugs": "https://github.com/modelcontextprotocol/servers/issues",
  "repository": {
    "type": "git",
    "url": "https://github.com/modelcontextprotocol/servers.git"
  },
  "type": "module",
  "bin": {
    "mcp-server-sequential-thinking": "dist/index.js"
  },
  "files": [
    "dist"
  ],
  "scripts": {
    "build": "tsc && shx chmod +x dist/*.js",
    "prepare": "npm run build",
    "watch": "tsc --watch",
    "test": "vitest run --coverage"
  },
  "dependencies": {
    "@modelcontextprotocol/sdk": "^1.26.0",
    "chalk": "^5.3.0",
    "yargs": "^17.7.2"
  },
  "devDependencies": {
    "@types/node": "^22",
    "@types/yargs": "^17.0.32",
    "@vitest/coverage-v8": "^2.1.8",
    "shx": "^0.3.4",
    "typescript": "^5.3.3",
    "vitest": "^2.1.8"
  }
}

15  mcpServer/modules/sequentialthinking/tsconfig.json  Normal file
@@ -0,0 +1,15 @@

{
  "compilerOptions": {
    "outDir": "./dist",
    "rootDir": ".",
    "target": "ES2022",
    "lib": ["ES2022", "DOM"],
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "esModuleInterop": true,
    "allowSyntheticDefaultImports": true,
    "types": ["node"]
  },
  "include": ["./**/*.ts"],
  "exclude": ["**/*.test.ts", "vitest.config.ts"]
}

14  mcpServer/modules/sequentialthinking/vitest.config.ts  Normal file
@@ -0,0 +1,14 @@

import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    globals: true,
    environment: 'node',
    include: ['**/__tests__/**/*.test.ts'],
    coverage: {
      provider: 'v8',
      include: ['**/*.ts'],
      exclude: ['**/__tests__/**', '**/dist/**'],
    },
  },
});