AI Code Sherlock
Documentation
AI Code Sherlock is a free, open-source desktop IDE assistant for automated code analysis, surgical patching, and autonomous AI workflow execution. This guide covers everything from installation to building complex multi-agent pipelines.
- 01 Installation & Requirements
- 02 Code Editor & AI Chat
- 03 Auto-Improve Pipeline
- 04 Script Runner (15+ languages)
- 05 Agent Constructor — Visual Workflow Editor
- 06 Node Types Reference
- 07 Browser Automation Nodes
- 08 Skills & Skill Registry
- 09 Multi-Threaded Project Execution
- 10 Step-by-Step Debugger
- 11 Keyboard Shortcuts
- 12 FAQ
⬇ Installation
Requirements
Install Steps
Pull a local model with `ollama pull deepseek-coder-v2`. No API key needed.
▶ First Run
1. Open a project folder — File → Open Folder, or drag a folder onto the file tree on the left.
2. Configure an AI model — Settings → Models. Add an Ollama URL (http://localhost:11434) or an OpenAI/Gemini API key.
3. Open a file — Click any file in the tree. The editor opens with syntax highlighting.
4. Ask the AI — Type a question or task in the chat panel on the right. The AI reads your open file as context.
5. Apply a patch — If the AI suggests code changes, click Apply in the Patches panel. Changes are backed up automatically.
📝 Code Editor
The code editor is the center panel. It supports syntax highlighting for 15+ languages, line numbers, and scroll-linked context compression for large files.
- →Context compression: Large files are summarized using AST analysis before being sent to the AI — reducing 120k tokens to ~4k without losing important structure.
- →Version control: Every patch is backed up. Restore any previous version via History → Version History.
- →Error Map: Confirmed bug→fix pairs are stored persistently. The AI references the Error Map to avoid repeating known mistakes.
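The context-compression idea above can be sketched in a few lines. This is a minimal illustration, not Sherlock's actual compressor: it parses a Python file with the standard `ast` module and keeps only class and function signatures, discarding bodies.

```python
import ast

def compress_source(source: str) -> str:
    """Summarize a Python file: keep class/function signatures, drop bodies.

    Illustrative only -- Sherlock's real AST compressor is more
    sophisticated (docstrings, imports, nesting, token budgets).
    """
    tree = ast.parse(source)
    lines = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            args = ", ".join(a.arg for a in node.args.args)
            lines.append(f"def {node.name}({args}): ...")
        elif isinstance(node, ast.ClassDef):
            lines.append(f"class {node.name}: ...")
    return "\n".join(lines)
```

A 120k-token file shrinks to a skeleton the model can still navigate, because names and signatures carry most of the structural information.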
🤖 AI Chat
The right-center chat panel is your main interface to the AI. It has three tabs: Chat, Logs, and Context.
Chat Tab
Type a message and press Enter or click Send. The AI sees your open file and any context files you have added. You can ask it to fix bugs, explain code, generate new functions, or write tests.
Context Tab
Add extra files as read-only context — the AI sees them but never modifies them. Useful for config files, documentation, or reference code.
Consensus Mode
Enable Consensus in the toolbar to query multiple AI models simultaneously. The best patch is selected based on agreement between models. Ideal for critical fixes.
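Conceptually, consensus selection is a vote among candidate patches. The sketch below uses exact-string agreement; the real selector presumably compares diffs more robustly, so treat this as an assumption about the mechanism, not the implementation.

```python
from collections import Counter

def pick_consensus(patches: list[str]) -> str:
    """Return the patch most models agree on; ties fall back to the
    first candidate seen. Illustrative sketch only."""
    counts = Counter(patches)
    best, _ = counts.most_common(1)[0]
    return best
```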
🩹 Patches & SEARCH/REPLACE
Sherlock never rewrites entire files. It generates surgical SEARCH/REPLACE patches — only the exact blocks that need changing. The right panel shows pending patches with a diff view.
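A SEARCH/REPLACE patch pairs an exact snippet from the current file with its replacement. The marker syntax below is illustrative; the exact delimiters Sherlock emits may differ:

```
<<<<<<< SEARCH
def total(items):
    return sum(items)
=======
def total(items):
    return sum(i.price for i in items)
>>>>>>> REPLACE
```

Because the SEARCH block must match the file verbatim, a patch either applies cleanly or is rejected — it cannot silently corrupt unrelated code.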
Backups are stored in `.sherlock_versions/` inside your project folder.
⚙️ Auto-Improve Pipeline
The pipeline runs a loop: generate patch → apply → run tests → if failed, analyze and re-patch. It continues until tests pass or the max iteration limit is reached.
8 Built-in Strategies
Each strategy tells the AI how to approach improvement.
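The pipeline's control flow can be sketched as a plain loop. The three callables are placeholders for Sherlock's internal steps; only the loop shape mirrors the documentation above.

```python
def auto_improve(generate_patch, apply_patch, run_tests, max_iters=5):
    """Run generate -> apply -> test until tests pass or the iteration
    limit is hit. `generate_patch(report)` receives the last failure
    report (None on the first attempt) so it can re-patch accordingly."""
    report = None
    for i in range(1, max_iters + 1):
        patch = generate_patch(report)
        apply_patch(patch)
        ok, report = run_tests()
        if ok:
            return {"passed": True, "iterations": i}
    return {"passed": False, "iterations": max_iters}
```

Feeding the failure report back into the next `generate_patch` call is what turns a blind retry into an analyze-and-re-patch loop.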
▶️ Script Runner
Run scripts in 15+ languages directly from the IDE. Output streams live to the Logs tab.
Supported Languages
🐍 Python 🟨 JavaScript 📘 TypeScript 💎 Ruby 🐘 PHP 🐹 Go 🌙 Lua 🔩 Perl 📊 R 🔬 Julia 🖥 Shell 📋 Batch 🔵 PowerShell
Interactive stdin
The script runner supports live stdin input. While a script is running, type into the stdin input box and press Enter to send input — no race conditions, no closed-pipe errors. Queue-based pumping ensures all input is delivered in order.
🧩 Visual Workflow Editor — Introduction
The Agent Constructor is a separate module that lets you build AI automation workflows visually. Connect nodes on an infinite canvas to create pipelines that combine AI agents, code execution, browser automation, and control flow logic.
Open it from the main toolbar or Ctrl+Shift+W. It opens as a new window alongside the IDE.
🖼 Canvas & Navigation
Moving Around
| Action | How |
|---|---|
| Zoom in/out | Ctrl + scroll wheel |
| Pan horizontally | Shift + scroll wheel |
| Pan freely | Hold middle mouse button + drag |
| Select multiple nodes | Click + drag rubber-band on empty canvas |
| Select single node | Left-click the node |
| Move node(s) | Drag selected node(s) |
| Auto-scroll while dragging | Move node near canvas edge — canvas scrolls automatically |
Minimap
The minimap (bottom-right corner) shows a bird's-eye view of the entire workflow. Click anywhere on the minimap to jump to that location. The highlighted rectangle represents the current viewport.
📦 Node Types
Nodes are divided into three categories:
🧠 AI Agent Nodes
These nodes call an AI model to perform a task. Each has configurable model, system prompt, temperature, max tokens, retry count, and skills.
💻 Code Writer 🔍 Code Reviewer 🧪 Tester 🏗️ Planner 🎨 Image Gen 👁️ Image Analyst 📁 File Manager ▶️ Script Runner ✅ Verifier 🎯 Orchestrator 🩹 Patcher 🤖 Custom
⚙️ Snippet / Logic Nodes
These nodes execute deterministic logic without AI — control flow, data manipulation, I/O.
Snippet nodes read and write workflow variables through `context["var"]`.
📌 Note Nodes
Notes and Project Start markers. Notes are visual-only and never executed. Project Start marks the workflow entry point.
🔗 Connections (Edges)
Connect two nodes by hovering over the source node's output port and dragging to the target node's input port. A bezier curve appears.
Edge Conditions
| Condition | Description |
|---|---|
| ALWAYS | Follow this edge every time (default). |
| ON_SUCCESS | Follow only if the previous node succeeded. |
| ON_FAILURE | Follow only if the previous node failed — use for fallback/recovery paths. |
| ON_CONDITION | Follow if a custom Python expression evaluates to True. |
Routing Modes
Each AI node has an Orchestration Mode setting:
- →Sequential — Follow the first matching edge (default).
- →Conditional — Evaluate all `conditional_branches` expressions; follow the first true branch.
- →LLM Router — Ask an AI model which next node to pick, given the current result and available options.
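The edge-condition table can be sketched as a single dispatch function. Evaluating ON_CONDITION expressions against the shared context dict is an assumption about the engine, shown here with a restricted `eval`:

```python
def follow_edge(condition, expr, context, prev_succeeded):
    """Decide whether an edge fires, per the condition table above.

    `expr` is the Python expression used by ON_CONDITION edges; it sees
    only the `context` dict, not builtins (illustrative sandboxing)."""
    if condition == "ALWAYS":
        return True
    if condition == "ON_SUCCESS":
        return prev_succeeded
    if condition == "ON_FAILURE":
        return not prev_succeeded
    if condition == "ON_CONDITION":
        return bool(eval(expr, {"__builtins__": {}}, {"context": context}))
    raise ValueError(f"unknown condition: {condition}")
```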
⚙️ Node Settings
Double-click any node (or right-click → Properties) to open the settings panel. Key settings for AI nodes: model, system prompt, temperature, max tokens, retry count, and assigned skills.
🔧 Skills & Skill Registry
Skills are reusable capability definitions that inject instructions into an agent's system prompt. Assign multiple skills to one node — they are merged automatically.
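Merging could be as simple as concatenating skill snippets onto the node's base prompt. The separator and ordering below are guesses; the docs only state that assigned skills "are merged automatically."

```python
def merge_skills(base_prompt, skills):
    """Merge skill prompt snippets into one system prompt.

    `skills` is a list of dicts with a 'prompt' key -- an assumed shape,
    not Sherlock's actual registry schema."""
    parts = [base_prompt] + [s["prompt"] for s in skills]
    return "\n\n".join(p.strip() for p in parts if p and p.strip())
```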
Built-in Skills
💻 Code Generation 🔍 Code Review 🔧 Patching 🧪 Testing 🏗️ Architecture 🐛 Debugging 📝 Documentation 👁️ Image Analysis 📁 File Operations 📊 Data Analysis
Creating Custom Skills
1. Open Tools → Skill Registry.
2. Click + New Skill. Enter a name, description, category, and system prompt snippet.
3. Optionally add example input/output to help the AI understand expected behavior.
4. Save. The skill is now available in all node settings panels.
Auto-Load Skills
Enable Auto-Load Skills on a node to have the engine use an AI model to automatically select the most relevant skills from the registry for that node's task description at runtime.
🌐 Browser Automation Nodes
The browser suite lets you control a web browser as part of a workflow — click, type, screenshot, and more.
🚀 Running Workflows
1. Set the entry node — right-click a node → "Set as Entry", or place a Project Start node and connect it first.
2. Click Run (▶ button in the toolbar). The engine starts executing from the entry node.
3. Watch logs — each node logs its status (⏭ starting, ✅ success, ❌ error) in the execution log panel.
4. Stop or Pause — use ⏸ Pause to freeze execution at the current node, or ⏹ Stop to abort.
🐛 Step-by-Step Debugger
The debugger lets you execute workflows one node at a time for testing and inspection.
1. Click Debug (🐛) in the Constructor toolbar.
2. The current node is highlighted on the canvas.
3. Click Step to advance one node. The Variables panel shows the execution context after each step.
4. Set a Breakpoint on a node (right-click → Enable Breakpoint) to auto-pause when run in normal mode.
5. The Path panel shows the complete execution path taken so far.
📁 Projects
Each project consists of a workflow (.json) and a root folder. Projects can be saved, loaded, duplicated, and exported.
- →Save: Ctrl+S saves the current workflow to a JSON file.
- →Load: File → Open Workflow, or drag a `.json` file onto the canvas.
- →Export: File → Export produces a standalone Python script that runs the workflow without the IDE.
⚡ Multi-Threaded Execution
The Project Execution Manager lets you run multiple workflow projects simultaneously in separate threads.
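The threading layout can be sketched with the standard library. Here each project is reduced to a zero-arg callable; real projects carry far more state, so this only illustrates the one-thread-per-project structure.

```python
import threading

def run_projects(projects):
    """Run each project's workflow in its own thread; collect results.

    `projects` maps a project name to a zero-arg callable (an assumed
    simplification). A lock guards the shared results dict."""
    results = {}
    lock = threading.Lock()

    def worker(name, fn):
        out = fn()
        with lock:
            results[name] = out

    threads = [threading.Thread(target=worker, args=item)
               for item in projects.items()]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```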
📊 Project Dashboard
The dashboard displays all registered projects in a card grid. Each card shows:
- →Status badge: Running 🟢, Paused 🟡, Stopped ⚪, Error 🔴
- →Iteration counter: How many times the workflow has run.
- →Controls: Start / Pause / Stop / Reset per-project.
- →Last result: Short summary of the last execution's outcome.
⌨️ Keyboard Shortcuts
Global
| Shortcut | Action |
|---|---|
| Ctrl+Z | Undo |
| Ctrl+Shift+Z / Ctrl+Y | Redo |
| Ctrl+S | Save |
| Ctrl+C | Copy selected node(s) |
| Ctrl+X | Cut selected node(s) |
| Ctrl+V | Paste node(s) |
| Delete | Delete selected node(s) / edge |
| Ctrl+A | Select all nodes |
| Escape | Deselect / cancel current action |
Canvas Navigation
| Shortcut | Action |
|---|---|
| Ctrl + scroll | Zoom in / out |
| Shift + scroll | Scroll horizontally |
| Middle mouse button + drag | Pan canvas freely |
| Ctrl+0 | Reset zoom to 100% |
| Ctrl+Shift+F | Fit all nodes in view |
Execution
| Shortcut | Action |
|---|---|
| F5 | Run workflow |
| F6 | Debug (step mode) |
| F10 | Step one node (in debug mode) |
| F9 | Toggle breakpoint on selected node |
| Shift+F5 | Stop execution |
🧠 AI Model Configuration
Add models in Settings → Models. Each model definition includes:
- →Provider: Ollama, OpenAI, Gemini, or any OpenAI-compatible API (Groq, Together, etc.).
- →Model ID: e.g. `deepseek-coder-v2`, `gpt-4o`, `gemini-1.5-pro`.
- →API Key / URL: Base URL for Ollama or API key for cloud providers.
- →Role: Assign a model as the default text, vision, reasoning, or fast model.
❓ FAQ
Does it work offline?
Yes. Install Ollama and pull a local model. Set the base URL to http://localhost:11434 in Settings → Models. No internet or API key required.
Can I use multiple AI providers simultaneously?
Yes. Define multiple model entries in Settings. Assign different models to different nodes in the Agent Constructor. Use Consensus Mode in the IDE to query several models at once.
Where are my patches and backups stored?
All versions are stored in .sherlock_versions/ inside your project folder. Access them via History → Version History.
How do I add a model that uses OpenAI-compatible API?
In Settings → Models, select source type "OpenAI Compatible", enter the base URL (e.g. https://api.groq.com/openai/v1), and your API key. Enter the model name as provided by the API.
The AI keeps making the same mistake. What do I do?
Open Error Map (Tools → Error Map). Add the error pattern and the correct fix. The AI will reference the Error Map on future runs and avoid repeating confirmed mistakes.
My workflow has a cycle — is that supported?
Yes. Intentional cycles (e.g. ScriptRunner → Patcher → ScriptRunner) are supported and useful for self-healing loops. The runtime engine does not block cycles — only the Infinite Loop Guard in the pipeline will catch unintentional infinite loops.
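A per-node visit cap is one plausible shape for such a guard. The heuristics of the real Infinite Loop Guard are not documented, so the sketch below is an assumption: it counts visits per node and aborts once any node exceeds the limit.

```python
def guarded_run(step, max_visits=50):
    """Execute a cyclic workflow while counting visits per node.

    `step(node)` returns the next node id, or None to stop. Exceeding
    `max_visits` on any single node trips the guard."""
    visits = {}
    node = "start"
    while node is not None:
        visits[node] = visits.get(node, 0) + 1
        if visits[node] > max_visits:
            raise RuntimeError(f"loop guard tripped at node {node!r}")
        node = step(node)
    return visits
```

Intentional self-healing cycles stay well under the cap because tests eventually pass; only a loop that never converges trips it.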
How do I share a workflow with someone?
Save the workflow (Ctrl+S) and share the resulting .json file. The recipient opens it in their Agent Constructor via File → Open Workflow.