GhidrAssist · Getting Started
Install, configure, and run your first assisted reverse-engineering workflow in Ghidra.
Getting Started with GhidrAssist
This guide helps you install GhidrAssist, configure an LLM provider, and run your first analysis in Ghidra.
Prerequisites
Before installing GhidrAssist, ensure you have:
- Ghidra: Version 11.0 or higher
- Internet connection: For cloud providers or downloading local models
- Python (optional): For some local tooling, depending on your MCP server setup
Installation
GhidrAssist is installed as a Ghidra extension.
Step 1: Install the Extension
Option A: Extension Manager (Recommended)
- Download the GhidrAssist release ZIP
- Open Ghidra
- Go to File → Install Extensions
- Click the + button and select the ZIP
- Enable the extension and restart Ghidra
Option B: Manual Install
- Copy the release ZIP into `Ghidra_Install/Extensions/Ghidra/`
- Restart Ghidra
- Enable the extension in File → Install Extensions
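The manual install above amounts to a single copy. As a sketch, assuming `GHIDRA_INSTALL_DIR` points at your Ghidra installation (both the path and the ZIP name pattern below are illustrative and depend on your Ghidra version and the release you downloaded):

```shell
# Hypothetical paths: adjust GHIDRA_INSTALL_DIR and the ZIP name to
# match your Ghidra install and the GhidrAssist release you downloaded.
GHIDRA_INSTALL_DIR=/opt/ghidra_11.0_PUBLIC
cp GhidrAssist-*.zip "$GHIDRA_INSTALL_DIR/Extensions/Ghidra/"
```

After restarting Ghidra, the extension should be listed in File → Install Extensions.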
Step 2: Enable the Plugin
- Open or create a project
- Launch CodeBrowser
- Go to File → Configure → Miscellaneous
- Check Enable GhidrAssist
Step 3: Open GhidrAssist
- In CodeBrowser, open Window → GhidrAssist
- The GhidrAssist panel appears with its tabbed interface
Initial Configuration
You need to configure at least one LLM provider.
Accessing Settings
- In the GhidrAssist panel, click the Settings tab
- The LLM Providers section appears at the top
Setting Up an LLM Provider
GhidrAssist supports multiple providers. Choose the one that fits your needs:
Option 1: Ollama (Local, Free, Private)
Ollama runs models locally on your machine.
Step 1: Install Ollama
```shell
# Linux/macOS
curl -fsSL https://ollama.ai/install.sh | sh

# Windows: download the installer from https://ollama.ai/download
```
Step 2: Pull a Model
```shell
# General-purpose model
ollama pull llama3.1:8b

# Reasoning model (recommended for complex analysis)
ollama pull gpt-oss:20b

# Start the server
ollama serve
```
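Before wiring Ollama into GhidrAssist, it can be worth confirming the server is reachable on its default port; Ollama's `/api/tags` endpoint lists the models you have pulled (this assumes `ollama serve` is already running):

```shell
# Ollama listens on port 11434 by default; /api/tags returns the
# locally available models as JSON.
curl -s http://localhost:11434/api/tags
```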
Step 3: Configure in GhidrAssist
- In Settings, click Add in LLM Providers
- Fill in:
  - Name: `Ollama Local`
  - Type: `Ollama`
  - Model: `gpt-oss:20b`
  - URL: `http://localhost:11434`
  - API Key: Leave empty
  - Max Tokens: `16384`
- Click Save
- Click Test
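If the Test button fails, a quick way to check the model itself, outside GhidrAssist, is a one-off generation request against Ollama's API (the prompt here is arbitrary, and the server must be running):

```shell
# Non-streaming one-shot generation against the local Ollama server.
curl -s http://localhost:11434/api/generate \
  -d '{"model": "gpt-oss:20b", "prompt": "Say hello", "stream": false}'
```

If this returns an error, fix the Ollama side first; if it returns a response, the problem is in the GhidrAssist provider settings.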
Option 2: OpenAI Platform API
Use OpenAI models with a paid API key.
Step 1: Get an API Key
- Go to platform.openai.com
- Create an API key from the dashboard
Step 2: Configure in GhidrAssist
- Click Add in LLM Providers
- Fill in:
  - Name: `OpenAI`
  - Type: `OpenAI Platform API`
  - Model: `gpt-5.2-codex`
  - URL: Leave empty (default)
  - API Key: Paste your API key
  - Max Tokens: `20000`
- Click Save
- Click Test
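To rule out GhidrAssist when a Test fails, you can verify the key directly against the OpenAI API; a valid key returns a JSON list of models, an invalid one returns a 401 error:

```shell
# Export OPENAI_API_KEY first, or substitute your key inline.
curl -s https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"
```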
Option 3: Anthropic Platform API
Use Claude models with a paid API key.
Step 1: Get an API Key
- Go to console.anthropic.com
- Create an API key
Step 2: Configure in GhidrAssist
- Click Add in LLM Providers
- Fill in:
  - Name: `Anthropic Claude`
  - Type: `Anthropic Platform API`
  - Model: `claude-sonnet-4-5`
  - URL: Leave empty (default)
  - API Key: Paste your API key
  - Max Tokens: `20000`
- Click Save
- Click Test
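As with OpenAI, you can check the key directly against the Anthropic API before debugging GhidrAssist settings; note that the Anthropic API requires an `anthropic-version` header:

```shell
# Export ANTHROPIC_API_KEY first, or substitute your key inline.
curl -s https://api.anthropic.com/v1/models \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01"
```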
Option 4: OAuth Providers (Claude Pro/Max or ChatGPT Pro/Plus)
If you have a Claude Pro/Max or ChatGPT Pro/Plus subscription, use OAuth instead of an API key.
Claude Pro/Max:
- Click Add in LLM Providers
- Select Type: `Anthropic OAuth`
- Enter Name and Model (e.g., `claude-sonnet-4-5`)
- Click Authenticate
- A browser window opens for login
- After authorization, credentials are saved automatically
- Click Save
ChatGPT Pro/Plus:
- Click Add in LLM Providers
- Select Type: `OpenAI OAuth`
- Enter Name and Model (e.g., `gpt-5.2-codex`)
- Click Authenticate
- A browser window opens for login
- After authorization, credentials are saved automatically
- Click Save
Setting the Active Provider
- Use the Active Provider dropdown at the bottom of the LLM Providers section
- Select the provider you want to use
Your First Analysis
Step 1: Load a Binary
- Open a binary in Ghidra
- Wait for auto-analysis to complete
Step 2: Navigate to a Function
- In the Functions window, click a function
- Or press G and enter an address
Step 3: Explain the Function
- Open the GhidrAssist panel
- Click the Explain tab
- Click Explain Function
- Wait for the explanation to stream in
Step 4: Ask a Question
- Switch to the Query tab
- Type a question, for example:
- "What does this function do?"
- "Are there any security concerns here?"
- "What functions does this call?"
- Click Submit
- Watch the response stream in
Next Steps
Explore the other GhidrAssist guides to go deeper into individual features.
Troubleshooting
"Connection failed" when testing provider
- Ollama: Ensure `ollama serve` is running
- Cloud providers: Verify your API key is correct
- Network issues: Check firewall and proxy settings
No response from LLM
- Check Window → Console in Ghidra for errors
- Verify the model name is correct
- Ensure you have sufficient API credits
Plugin not appearing
- Restart Ghidra after installation
- Confirm the extension is enabled in File → Install Extensions
- Ensure the plugin is checked in File → Configure → Miscellaneous
Slow responses
- Local models: Use a smaller model or a GPU
- Cloud models: Reasoning models are slower by design
- Large functions: Analyze smaller functions first
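For local setups, `ollama ps` is often the first thing to check when responses are slow: it shows which models are loaded, their memory footprint, and how each is split between CPU and GPU (a model that spills out of GPU memory runs much slower):

```shell
# Shows loaded models, their memory footprint, and the CPU/GPU split.
ollama ps
```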