# Large File MCP Server

A production-ready MCP server with smart chunking, navigation, and streaming capabilities.

## Smart Chunking

Automatically determines the optimal chunk size based on the file type, and handles files of any size efficiently.
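A minimal sketch of what file-type-aware chunk sizing might look like. The kinds, extension lists, and line counts below are illustrative assumptions, not the server's actual values:

```typescript
// Illustrative file-type-aware chunk sizing (hypothetical names and values).
type FileKind = "log" | "code" | "data" | "binary";

// Map a file extension to a broad kind of content.
function detectKind(filePath: string): FileKind {
  const ext = filePath.split(".").pop()?.toLowerCase() ?? "";
  if (["log", "txt"].includes(ext)) return "log";
  if (["ts", "js", "py", "java", "go", "rs"].includes(ext)) return "code";
  if (["json", "csv", "xml", "yaml"].includes(ext)) return "data";
  return "binary";
}

// Pick a chunk size (in lines) suited to how that content is typically read.
function chunkSizeFor(filePath: string): number {
  switch (detectKind(filePath)) {
    case "log":    return 1000; // uniform lines, large chunks scan faster
    case "code":   return 500;  // smaller chunks keep surrounding context visible
    case "data":   return 200;  // structured records benefit from tight windows
    case "binary": return 100;  // conservative default
  }
}
```

The point of the heuristic is that "optimal" depends on how the content is consumed: logs are scanned in bulk, while code is read with its surrounding context.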
Read a specific chunk of a large file:

```json
{
  "tool": "read_large_file_chunk",
  "arguments": {
    "filePath": "/var/log/system.log",
    "chunkIndex": 0,
    "includeLineNumbers": true
  }
}
```
Search for patterns with context:

```json
{
  "tool": "search_in_large_file",
  "arguments": {
    "filePath": "/var/log/error.log",
    "pattern": "ERROR.*database",
    "regex": true,
    "contextBefore": 3,
    "contextAfter": 3
  }
}
```
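The `contextBefore`/`contextAfter` behavior can be sketched as a pure function over in-memory lines. This is an illustrative simplification with hypothetical names; the real server streams the file rather than holding it in memory:

```typescript
// One search hit with its surrounding context lines.
interface Match {
  line: number;     // 1-based line number of the match
  text: string;     // the matching line
  before: string[]; // up to contextBefore preceding lines
  after: string[];  // up to contextAfter following lines
}

function searchWithContext(
  lines: string[],
  pattern: RegExp,
  contextBefore: number,
  contextAfter: number
): Match[] {
  const matches: Match[] = [];
  lines.forEach((text, i) => {
    if (pattern.test(text)) {
      matches.push({
        line: i + 1,
        text,
        // slice() clamps naturally at the start and end of the file
        before: lines.slice(Math.max(0, i - contextBefore), i),
        after: lines.slice(i + 1, i + 1 + contextAfter),
      });
    }
  });
  return matches;
}
```

Clamping with `slice` means matches near the start or end of the file simply return fewer context lines instead of erroring.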
Navigate to a specific line:

```json
{
  "tool": "navigate_to_line",
  "arguments": {
    "filePath": "/code/app.ts",
    "lineNumber": 1234,
    "contextLines": 10
  }
}
```

Working with large files in AI applications presents unique challenges.
Large File MCP Server solves these problems with:

- ✅ **Smart Chunking** - Automatically optimizes chunk size based on file type
- ✅ **Streaming Architecture** - Process files of any size without memory issues
- ✅ **Intelligent Caching** - LRU cache with 80-90% hit rates for repeated access
- ✅ **Powerful Search** - Regex support with contextual results
- ✅ **Type Safety** - Full TypeScript support prevents runtime errors
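The LRU caching idea behind repeated-access speedups can be sketched in a few lines. This is a generic illustration, not the server's internal cache; it exploits the fact that a JavaScript `Map` iterates keys in insertion order:

```typescript
// Minimal LRU cache: least-recently-used entry is always the first Map key.
class LruCache<K, V> {
  private map = new Map<K, V>();
  constructor(private capacity: number) {}

  get(key: K): V | undefined {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key)!;
    // Re-insert so this key becomes the most recently used.
    this.map.delete(key);
    this.map.set(key, value);
    return value;
  }

  set(key: K, value: V): void {
    if (this.map.has(key)) {
      this.map.delete(key);
    } else if (this.map.size >= this.capacity) {
      // Evict the least recently used entry (first key in insertion order).
      this.map.delete(this.map.keys().next().value as K);
    }
    this.map.set(key, value);
  }
}
```

Keyed by file path plus chunk index, a cache like this lets repeated reads of the same region skip disk I/O entirely, which is where high hit rates for repeated access come from.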
| File Size | Operation | Time | Method |
|---|---|---|---|
| < 1MB | Read chunk | < 100ms | Direct read |
| 1-100MB | Search | < 500ms | Streaming |
| 100MB-1GB | Navigate | 1-3s | Streaming + cache |
| > 1GB | Stream | Progressive | AsyncGenerator |
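The AsyncGenerator approach in the last row can be sketched as follows. Function name and batch size are hypothetical; the idea is that lines are yielded in fixed-size batches, so even multi-gigabyte files are never loaded fully into memory:

```typescript
import { createReadStream } from "node:fs";
import { createInterface } from "node:readline";

// Progressively stream a file as batches of lines.
async function* streamLines(
  filePath: string,
  batchSize = 1000
): AsyncGenerator<string[]> {
  const rl = createInterface({
    input: createReadStream(filePath),
    crlfDelay: Infinity, // treat \r\n as a single line break
  });
  let batch: string[] = [];
  for await (const line of rl) {
    batch.push(line);
    if (batch.length === batchSize) {
      yield batch; // hand a full batch to the consumer, then keep reading
      batch = [];
    }
  }
  if (batch.length > 0) yield batch; // flush the final partial batch
}
```

Because the consumer pulls batches with `for await`, backpressure is implicit: the file is only read as fast as the caller processes it.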
The server intelligently detects each file's type and optimizes its chunking strategy accordingly.
Install via npm:

```shell
npm install -g @willianpinho/large-file-mcp
```

Or use with npx:

```shell
npx @willianpinho/large-file-mcp
```