Getting Started with BinAssist
This guide will help you install BinAssist, configure an LLM provider, and run your first analysis.
Prerequisites
Before installing BinAssist, ensure you have:
- Binary Ninja: Version 5000 or higher
- Python: Python 3.8+ (included with Binary Ninja)
- Internet connection: For cloud providers or downloading local models
Installation
Step 0: Running Windows? Read this first
BinAssist works on Windows, but the MCP SDK has some dependencies that require manual effort to install properly. Please refer to: BinAssist on Windows
Step 1: Install the Plugin
BinAssist can be installed from the Binary Ninja Plugin Manager or manually:
Option A: Plugin Manager (Recommended)
- Open Binary Ninja
- Go to Edit > Preferences > Plugin Manager
- Search for "BinAssist"
- Click Install
- Restart Binary Ninja
Option B: Manual Installation
- Download or clone the BinAssist repository
- Copy the `BinAssist` folder to your Binary Ninja plugins directory:
  - Linux: `~/.binaryninja/plugins/`
  - macOS: `~/Library/Application Support/Binary Ninja/plugins/`
  - Windows: `%APPDATA%\Binary Ninja\plugins\`
- Restart Binary Ninja
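If you're unsure which directory applies on your machine, the per-OS paths above can be resolved programmatically. A minimal sketch (the helper name `binaryninja_plugin_dir` is illustrative, not a Binary Ninja or BinAssist API):

```python
import os
import sys
from pathlib import Path

def binaryninja_plugin_dir() -> Path:
    """Return the default user plugin directory for the current OS."""
    if sys.platform.startswith("linux"):
        return Path.home() / ".binaryninja" / "plugins"
    if sys.platform == "darwin":
        return Path.home() / "Library" / "Application Support" / "Binary Ninja" / "plugins"
    if sys.platform == "win32":
        # %APPDATA% expands to e.g. C:\Users\<you>\AppData\Roaming
        return Path(os.environ["APPDATA"]) / "Binary Ninja" / "plugins"
    raise RuntimeError(f"Unsupported platform: {sys.platform}")

print(binaryninja_plugin_dir())
```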
Step 2: Install Dependencies
Open a terminal in the BinAssist plugin directory and run:
```shell
pip install -r requirements.txt
```
This installs the required Python packages:
- `openai`: OpenAI and compatible API client
- `anthropic`: Anthropic Claude API client
- `httpx`: HTTP client for API calls
- `mcp`: Model Context Protocol client
- `whoosh`: Full-text search for RAG
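To confirm the packages are importable from the Python environment Binary Ninja uses, you can run a quick check with Binary Ninja's bundled interpreter (this snippet is a convenience sketch, not part of BinAssist):

```python
import importlib.util

# The packages listed in requirements.txt
REQUIRED = ["openai", "anthropic", "httpx", "mcp", "whoosh"]

def missing_packages(names):
    """Return the names that cannot be found by the import system."""
    return [n for n in names if importlib.util.find_spec(n) is None]

missing = missing_packages(REQUIRED)
if missing:
    print("Missing packages:", ", ".join(missing))
else:
    print("All BinAssist dependencies are installed.")
```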
Initial Configuration
After installation, you need to configure at least one LLM provider.
Opening BinAssist
- Load any binary in Binary Ninja
- Look for the robot icon in the sidebar (left side, below the Sidekick icon)
- Click the icon to open the BinAssist panel
Accessing Settings
- In the BinAssist panel, click the Settings tab (last tab)
- You'll see the LLM Providers section at the top
Setting Up an LLM Provider
BinAssist supports multiple LLM providers. Choose the one that best fits your needs:
Option 1: Ollama (Local, Free, Private)
Ollama runs models locally on your machine, ensuring privacy and avoiding API costs.
Step 1: Install Ollama
```shell
# Linux/macOS
curl -fsSL https://ollama.ai/install.sh | sh

# Windows: Download from https://ollama.ai/download
```
Step 2: Pull a Model
```shell
# General purpose model
ollama pull llama3.1:8b

# Or a reasoning model (recommended for complex analysis)
ollama pull gpt-oss:20b

# Start the server
ollama serve
```
Step 3: Configure in BinAssist
- In the Settings tab, click Add in the LLM Providers section
- Fill in the fields:
  - Name: `Ollama Local`
  - Type: Select `Ollama`
  - Model: `llama3.1:8b` (or your chosen model)
  - URL: `http://localhost:11434`
  - API Key: Leave empty
  - Max Tokens: `8192`
- Click Save
- Click Test to verify the connection
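If the Test button reports a failure, you can check whether the Ollama server is reachable at all, independently of Binary Ninja. A standard-library-only sketch that queries Ollama's `/api/tags` endpoint (which lists locally pulled models):

```python
import json
import urllib.error
import urllib.request

def ollama_reachable(base_url="http://localhost:11434", timeout=3.0):
    """Return True if an Ollama server answers at base_url."""
    try:
        with urllib.request.urlopen(base_url + "/api/tags", timeout=timeout) as resp:
            models = [m.get("name") for m in json.load(resp).get("models", [])]
            print("Ollama is up; local models:", models)
            return True
    except (urllib.error.URLError, OSError):
        return False

print("reachable:", ollama_reachable())
```

If this returns False while `ollama serve` is running, check that nothing else is bound to port 11434 and that no firewall rule blocks localhost traffic.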
Option 2: OpenAI Platform API
Use OpenAI's models with a paid API key.
Step 1: Get an API Key
- Go to platform.openai.com
- Sign up or log in
- Navigate to API Keys
- Create a new API key
- Copy the key (you won't be able to see it again)
Step 2: Configure in BinAssist
- Click Add in the LLM Providers section
- Fill in the fields:
  - Name: `OpenAI`
  - Type: Select `OpenAI Platform API`
  - Model: `gpt-5.2-codex`
  - URL: Leave empty (uses default)
  - API Key: Paste your API key
  - Max Tokens: `20000`
- Click Save
- Click Test to verify
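If Test fails, you can verify the key works outside Binary Ninja with a direct call to OpenAI's Chat Completions endpoint. A standard-library sketch of the request shape (this mirrors the public API, not BinAssist's internal provider code; `build_chat_request` is a hypothetical helper):

```python
import json
import os
import urllib.request

def build_chat_request(api_key, model="gpt-5.2-codex", prompt="Say hello"):
    """Build (url, headers, body) for an OpenAI Chat Completions call."""
    url = "https://api.openai.com/v1/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return url, headers, body

# Only send the request if a key is available in the environment
if os.environ.get("OPENAI_API_KEY"):
    url, headers, body = build_chat_request(os.environ["OPENAI_API_KEY"])
    req = urllib.request.Request(url, data=body, headers=headers, method="POST")
    with urllib.request.urlopen(req, timeout=60) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
```

A 401 response here means the key itself is wrong; a 429 usually means insufficient credits or rate limiting.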
Option 3: Anthropic Platform API
Use Anthropic's Claude models with a paid API key.
Step 1: Get an API Key
- Go to console.anthropic.com
- Sign up or log in
- Navigate to API Keys
- Create a new API key
- Copy the key
Step 2: Configure in BinAssist
- Click Add in the LLM Providers section
- Fill in the fields:
  - Name: `Anthropic Claude`
  - Type: Select `Anthropic Platform API`
  - Model: `claude-sonnet-4-5` (or `claude-opus-4-5`)
  - URL: Leave empty (uses default)
  - API Key: Paste your API key
  - Max Tokens: `20000`
- Click Save
- Click Test to verify
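The same kind of out-of-band check works for Anthropic, whose Messages API authenticates with an `x-api-key` header (plus a required `anthropic-version` header and `max_tokens` field) rather than a Bearer token. A sketch of the request shape (`build_messages_request` is a hypothetical helper, not BinAssist code):

```python
import json

def build_messages_request(api_key, model="claude-sonnet-4-5", prompt="Say hello"):
    """Build (url, headers, body) for an Anthropic Messages API call."""
    url = "https://api.anthropic.com/v1/messages"
    headers = {
        "x-api-key": api_key,              # Anthropic uses x-api-key, not Bearer auth
        "anthropic-version": "2023-06-01", # required API version header
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "max_tokens": 64,                  # required by the Messages API
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return url, headers, body
```

The body can be POSTed with any HTTP client; a successful response carries the reply in the `content` field.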
Option 4: OAuth Providers (Claude Pro/Max or ChatGPT Pro/Plus)
If you have a Claude Pro/Max or ChatGPT Pro/Plus subscription, you can use OAuth authentication instead of an API key.
For Claude Pro/Max:
- Click Add in the LLM Providers section
- Select Type: `Anthropic OAuth`
- Fill in a Name and Model (e.g., `claude-sonnet-4-5`)
- Click Authenticate
- A browser window will open for you to sign in to your Anthropic account
- After authorization, the credentials will be saved automatically
- Click Save
For ChatGPT Pro/Plus:
- Click Add in the LLM Providers section
- Select Type: `OpenAI OAuth`
- Fill in a Name and Model (e.g., `gpt-5.2-codex`)
- Click Authenticate
- A browser window will open for you to sign in to your OpenAI account
- After authorization, the credentials will be saved automatically
- Click Save
Setting the Active Provider
After adding providers, select the one you want to use:
- Open the Active Provider dropdown at the bottom of the LLM Providers section
- Select your configured provider
- This provider will be used for all BinAssist operations
Your First Analysis
Now that BinAssist is configured, let's run your first analysis.
Step 1: Load a Binary
- Open a binary file in Binary Ninja (File > Open)
- Wait for the initial analysis to complete
Step 2: Navigate to a Function
- In the Functions list (left panel), click on any function
- Or use Go to Address (G key) to jump to a specific location
Step 3: Explain the Function
- Open the BinAssist sidebar (robot icon)
- Click the Explain tab
- Click the Explain Function button
- Wait for the LLM response to stream in
You should see:
- A detailed explanation of what the function does
- Security analysis panel with risk assessment
- Activity profile and detected API patterns
Step 4: Ask a Question
- Switch to the Query tab
- Type a question in the input field, for example:
- "What does this function do?"
- "Are there any security concerns here?"
- "What functions does this call?"
- Press Enter or click Send
- Watch the response stream in
Next Steps
Now that you have BinAssist working, explore these features:
- Explain Workflow: Learn to build context across your analysis
- Query Workflow: Master interactive queries and the ReAct agent
- Semantic Graph Workflow: Build a knowledge graph of your binary
- Settings Reference: Configure advanced options
Troubleshooting
"Connection failed" when testing provider
- Ollama: Ensure `ollama serve` is running
- Cloud providers: Verify your API key is correct
- Network issues: Check firewall and proxy settings
No response from LLM
- Check the Binary Ninja log (View > Log) for error messages
- Verify the model name is correct for your provider
- Ensure you have sufficient API credits (for paid providers)
Plugin not appearing in sidebar
- Restart Binary Ninja after installation
- Check that all dependencies are installed
- Verify the plugin is in the correct directory
Slow responses
- Local models: Consider using a smaller model or getting a better GPU
- Cloud providers: This is normal for reasoning models (o1, Claude with extended thinking)
- Large functions: Try analyzing smaller functions first
For additional help, check the GitHub Issues page.