Troubleshooting & FAQ
Common issues, diagnostics, and answers to frequently asked questions about LibreFang.
Table of Contents
- Quick Diagnostics
- Installation Issues
- Configuration Issues
- LLM Provider Issues
- Channel Issues
- Agent Issues
- API Issues
- Desktop App Issues
- Performance
- FAQ
Quick Diagnostics
Run the built-in diagnostic tool:
librefang doctor
This checks:
- Configuration file exists and is valid TOML
- API keys are set in environment
- Database is accessible
- Daemon status (running or not)
- Port availability
- Tool dependencies (Python, signal-cli, etc.)
Check Daemon Status
librefang status
Check Health via API
curl http://127.0.0.1:4200/api/health
curl http://127.0.0.1:4200/api/health/detail # Requires auth
View Logs
LibreFang uses `tracing` for structured logging. Set the log level via the `RUST_LOG` environment variable:
RUST_LOG=info librefang start # Default
RUST_LOG=debug librefang start # Verbose
RUST_LOG=librefang=debug librefang start # Only LibreFang debug, deps at info
Installation Issues
cargo install fails with compilation errors
Cause: Rust toolchain too old or missing system dependencies.
Fix:
rustup update stable
rustup default stable
rustc --version # Need 1.75+
On Linux, you may also need:
# Debian/Ubuntu
sudo apt install pkg-config libssl-dev libsqlite3-dev
# Fedora
sudo dnf install openssl-devel sqlite-devel
librefang command not found after install
Fix: Ensure ~/.cargo/bin is in your PATH:
export PATH="$HOME/.cargo/bin:$PATH"
# Add to ~/.bashrc or ~/.zshrc to persist
Docker container won't start
Common causes:
- No API key provided:
docker run -e GROQ_API_KEY=... ghcr.io/librefang/librefang
- Port already in use: change the port mapping (e.g., `-p 3001:4200`)
- Permission denied on volume mount: check directory permissions
Configuration Issues
"Config file not found"
Fix: Run librefang init to create the default config:
librefang init
This creates ~/.librefang/config.toml with sensible defaults.
"Missing API key" warnings on start
Cause: No LLM provider API key found in environment.
Fix: Set at least one provider key:
export GROQ_API_KEY="gsk_..." # Groq (free tier available)
# OR
export ANTHROPIC_API_KEY="sk-ant-..."
# OR
export OPENAI_API_KEY="sk-..."
Add to your shell profile to persist across sessions.
Config validation errors
Run validation manually:
librefang config show
Common issues:
- Malformed TOML syntax (use a TOML validator)
- Invalid port numbers (must be 1-65535)
- Missing required fields in channel configs
"Port already in use"
Fix: Change the port in config or kill the existing process:
# Change API port
# In config.toml:
# [api]
# listen_addr = "127.0.0.1:3001"
# Or find and kill the process using the port
# Linux/macOS:
lsof -i :4200
# Windows:
netstat -aon | findstr :4200
LLM Provider Issues
"Authentication failed" / 401 errors
Causes:
- API key not set or incorrect
- API key expired or revoked
- Wrong env var name
Fix: Verify your key:
# Check if the env var is set
echo $GROQ_API_KEY
# Test the provider
curl http://127.0.0.1:4200/api/providers/groq/test -X POST
"Rate limited" / 429 errors
Cause: Too many requests to the LLM provider.
Fix:
- The driver automatically retries with exponential backoff
- Reduce `max_llm_tokens_per_hour` in the agent's capabilities
- Switch to a provider with higher rate limits
- Use multiple providers with model routing
Slow responses
Possible causes:
- Provider API latency (try Groq for fast inference)
- Large context window (use `/compact` to shrink the session)
- Complex tool chains (check the iteration count in the response)
Fix: Use per-agent model overrides to use faster models for simple agents:
[model]
provider = "groq"
model = "llama-3.1-8b-instant" # Fast, small model
"Model not found"
Fix: Check available models:
curl http://127.0.0.1:4200/api/models
Or use an alias:
[model]
model = "llama" # Alias for llama-3.3-70b-versatile
See the full alias list:
curl http://127.0.0.1:4200/api/models/aliases
Ollama / local models not connecting
Fix: Ensure the local server is running:
# Ollama
ollama serve # Default: http://localhost:11434
# vLLM
python -m vllm.entrypoints.openai.api_server --model ...
# LM Studio
# Start from the LM Studio UI, enable API server
Channel Issues
Telegram bot not responding
Checklist:
- Bot token is correct: `echo $TELEGRAM_BOT_TOKEN`
- Bot has been started (send `/start` in Telegram)
- If `allowed_users` is set, your Telegram user ID is in the list
- Check logs for "Telegram adapter" messages
Discord bot offline
Checklist:
- Bot token is correct
- Message Content Intent is enabled in Discord Developer Portal
- Bot has been invited to the server with correct permissions
- Check Gateway connection in logs
Slack bot not receiving messages
Checklist:
- Both `SLACK_BOT_TOKEN` (`xoxb-`) and `SLACK_APP_TOKEN` (`xapp-`) are set
- Socket Mode is enabled in the Slack app settings
- Bot has been added to the channels it should monitor
- Required scopes: `chat:write`, `app_mentions:read`, `im:history`, `im:read`, `im:write`
Webhook-based channels (WhatsApp, LINE, Viber, etc.)
Checklist:
- Your server is publicly accessible (or use a tunnel like ngrok)
- Webhook URL is correctly configured in the platform dashboard
- Webhook port is open and not blocked by firewall
- Verify token matches between config and platform dashboard
"Channel adapter failed to start"
Common causes:
- Missing or invalid token
- Port already in use (for webhook-based channels)
- Network connectivity issues
Check logs for the specific error:
RUST_LOG=librefang_channels=debug librefang start
Agent Issues
Agent stuck in a loop
Cause: The agent is repeatedly calling the same tool with the same parameters.
Automatic protection: LibreFang has a built-in loop guard:
- Warn at 3 identical tool calls
- Block at 5 identical tool calls
- Circuit breaker at 30 total blocked calls (stops the agent)
Manual fix: Cancel the agent's current run:
curl -X POST http://127.0.0.1:4200/api/agents/{id}/stop
Or via chat command: /stop
Agent running out of context
Cause: Conversation history is too long for the model's context window.
Fix: Compact the session:
curl -X POST http://127.0.0.1:4200/api/agents/{id}/session/compact
Or via chat command: /compact
Auto-compaction is enabled by default when the session reaches the threshold (configurable in [compaction]).
Agent not using tools
Cause: Tools not granted in the agent's capabilities.
Fix: Check the agent's manifest:
[capabilities]
tools = ["file_read", "web_fetch", "shell_exec"] # Must list each tool
# OR
# tools = ["*"] # Grant all tools (use with caution)
"Permission denied" errors in agent responses
Cause: The agent is trying to use a tool or access a resource not in its capabilities.
Fix: Add the required capability to the agent manifest. Common ones:
- `tools = [...]` for tool access
- `network = ["*"]` for network access
- `memory_write = ["self.*"]` for memory writes
- `shell = ["*"]` for shell commands (use with caution)
Agent spawning fails
Check:
- TOML manifest is valid: `librefang agent spawn --dry-run manifest.toml`
- LLM provider is configured and has a valid key
- Model specified in manifest exists in the catalog
API Issues
401 Unauthorized
Cause: API key required but not provided.
Fix: Include the Bearer token:
curl -H "Authorization: Bearer your-api-key" http://127.0.0.1:4200/api/agents
429 Too Many Requests
Cause: GCRA rate limiter triggered.
Fix: Wait for the Retry-After period, or increase rate limits in config:
[api]
rate_limit_per_second = 20 # Increase if needed
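GCRA tracks a "theoretical arrival time" (TAT) per client and rejects requests that arrive before it, which is why you see a `Retry-After` header. A minimal illustrative sketch of the algorithm (not LibreFang's actual code):

```python
class Gcra:
    """Minimal GCRA limiter: `rate` requests/second with a burst of `burst`."""

    def __init__(self, rate: float, burst: int = 1):
        self.interval = 1.0 / rate          # emission interval between requests
        self.limit = burst * self.interval  # delay tolerance (burst capacity)
        self.tat = 0.0                      # theoretical arrival time

    def allow(self, now: float) -> bool:
        tat = max(self.tat, now)
        if tat - now > self.limit - self.interval:
            return False                    # too early: caller should retry later
        self.tat = tat + self.interval      # schedule the next allowed slot
        return True
```

With `rate=2.0` and `burst=1`, a request at t=0.0 is allowed, a second at t=0.1 is rejected, and one at t=0.5 is allowed again.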
CORS errors from browser
Cause: Trying to access API from a different origin.
Fix: Add your origin to CORS config:
[api]
cors_origins = ["http://localhost:5173", "https://your-app.com"]
WebSocket disconnects
Possible causes:
- Idle timeout (send periodic pings)
- Network interruption (reconnect automatically)
- Agent crashed (check logs)
Client-side fix: Implement reconnection logic with exponential backoff.
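One way to schedule reconnect attempts (an illustrative Python sketch; the base delay and cap are arbitrary choices, and production clients usually add random jitter and reset the schedule after a successful connection):

```python
def backoff_delays(base: float = 1.0, cap: float = 60.0, attempts: int = 6):
    """Exponentially growing reconnect delays in seconds, capped at `cap`."""
    return [min(cap, base * 2 ** n) for n in range(attempts)]

# A client sleeps for each delay before the next reconnect attempt:
# 1s, 2s, 4s, 8s, 16s, 32s, ...
```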
OpenAI-compatible API not working with my tool
Checklist:
- Use `POST /v1/chat/completions` (not `/api/agents/{id}/message`)
- Set the model to `librefang:agent-name` (e.g., `librefang:coder`)
- Streaming: set `"stream": true` for SSE responses
- Images: use `image_url` with the `data:image/png;base64,...` format
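Putting those points together, a request body might look like this (an illustrative Python sketch; the base64 data is a placeholder, and the request itself is only shown in comments):

```python
import json

# Illustrative payload for POST http://127.0.0.1:4200/v1/chat/completions
payload = {
    "model": "librefang:coder",   # "librefang:" prefix + agent name
    "stream": True,               # request SSE streaming responses
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            {"type": "image_url",
             "image_url": {"url": "data:image/png;base64,..."}},  # placeholder
        ],
    }],
}
body = json.dumps(payload)
# Send with headers:
#   Authorization: Bearer <api-key>
#   Content-Type: application/json
```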
Desktop App Issues
App won't start
Checklist:
- Only one instance can run at a time (single-instance enforcement)
- Check if the daemon is already running on the same ports
- Try deleting `~/.librefang/daemon.json` and restarting
White/blank screen in app
Cause: The embedded API server hasn't started yet.
Fix: Wait a few seconds. If persistent, check logs for server startup errors.
System tray icon missing
Platform-specific:
- Linux: requires a system tray implementation (e.g., `libappindicator` on GNOME)
- macOS: should work out of the box
- Windows: Check notification area settings, may need to show hidden icons
Performance
High memory usage
Tips:
- Reduce the number of concurrent agents
- Use session compaction for long-running agents
- Use smaller models (Llama 8B instead of 70B for simple tasks)
- Clear old sessions: `DELETE /api/sessions/{id}`
Slow startup
Normal startup: <200ms for the kernel, ~1-2s with channel adapters.
If slower:
- Check the database size (`~/.librefang/data/librefang.db`)
- Reduce the number of enabled channels
- Check network connectivity (MCP server connections happen at boot)
High CPU usage
Possible causes:
- WASM sandbox execution (fuel-limited, should self-terminate)
- Multiple agents running simultaneously
- Channel adapters reconnecting (exponential backoff)
FAQ
How do I switch the default LLM provider?
Edit ~/.librefang/config.toml:
[default_model]
provider = "groq"
model = "llama-3.3-70b-versatile"
api_key_env = "GROQ_API_KEY"
Can I use multiple providers at the same time?
Yes. Each agent can use a different provider via its manifest [model] section. The kernel creates a dedicated driver per unique provider configuration.
How do I add a new channel?
- Add the channel config to `~/.librefang/config.toml` under `[channels]`
- Set the required environment variables (tokens, secrets)
- Restart the daemon
How do I update LibreFang?
# From source
cd librefang && git pull && cargo install --path crates/librefang-cli
# Docker
docker pull ghcr.io/librefang/librefang:latest
Can agents talk to each other?
Yes. Agents can use the agent_send, agent_spawn, agent_find, and agent_list tools to communicate. The orchestrator template is specifically designed for multi-agent delegation.
Is my data sent to the cloud?
Only LLM API calls go to the provider's servers. All agent data, memory, sessions, and configuration are stored locally in SQLite (~/.librefang/data/librefang.db). The OFP wire protocol uses HMAC-SHA256 mutual authentication for P2P communication.
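The challenge–response shape of HMAC-SHA256 authentication can be sketched with Python's standard library (illustrative only; the actual OFP handshake and message framing are not shown here):

```python
import hashlib
import hmac
import os

def tag(key: bytes, message: bytes) -> bytes:
    """HMAC-SHA256 tag over a message."""
    return hmac.new(key, message, hashlib.sha256).digest()

# Mutual authentication sketch: each peer proves knowledge of the shared
# key by answering the other peer's random challenge. Only one direction
# is shown; the same exchange runs the other way for mutuality.
shared_key = os.urandom(32)
challenge_from_a = os.urandom(16)
response_from_b = tag(shared_key, challenge_from_a)

# A verifies B's response in constant time:
b_is_authentic = hmac.compare_digest(
    tag(shared_key, challenge_from_a), response_from_b
)
```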
How do I back up my data?
Back up these files:
- `~/.librefang/config.toml` (configuration)
- `~/.librefang/data/librefang.db` (all agent data, memory, sessions)
- `~/.librefang/skills/` (installed skills)
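A small helper that copies those paths into a timestamped backup directory (an illustrative sketch, not a bundled tool; the paths are parameters so it can be pointed anywhere):

```python
import shutil
import time
from pathlib import Path

def backup_librefang(home: Path, dest_root: Path) -> Path:
    """Copy config, database, and skills into a timestamped backup dir."""
    dest = dest_root / time.strftime("librefang-backup-%Y%m%d-%H%M%S")
    for rel in ("config.toml", "data/librefang.db"):
        src = home / rel
        if src.is_file():
            target = dest / rel
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, target)          # preserves timestamps
    if (home / "skills").is_dir():
        shutil.copytree(home / "skills", dest / "skills")
    return dest

# Usage: backup_librefang(Path.home() / ".librefang", Path("/backups"))
```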
How do I reset everything?
rm -rf ~/.librefang
librefang init # Start fresh
Can I run LibreFang without an internet connection?
Yes, if you use a local LLM provider:
- Ollama: `ollama serve` + `ollama pull llama3.2`
- vLLM: self-hosted model server
- LM Studio: GUI-based local model runner
Set the provider in config:
[default_model]
provider = "ollama"
model = "llama3.2"
What's the difference between LibreFang and OpenClaw?
| Aspect | LibreFang | OpenClaw |
|---|---|---|
| Language | Rust | Python |
| Channels | 40 | 38 |
| Skills | 60 | 57 |
| Providers | 20 | 3 |
| Security | 16 systems | Config-based |
| Binary size | ~30 MB | ~200 MB |
| Startup | <200 ms | ~3 s |
LibreFang can import OpenClaw configs: librefang migrate --from openclaw
How do I report a bug or request a feature?
- Bugs: Open an issue on GitHub
- Security: See SECURITY.md for responsible disclosure
- Features: Open a GitHub discussion or PR
What are the system requirements?
| Resource | Minimum | Recommended |
|---|---|---|
| RAM | 128 MB | 512 MB |
| Disk | 50 MB (binary) | 500 MB (with data) |
| CPU | Any x86_64/ARM64 | 2+ cores |
| OS | Linux, macOS, Windows | Any |
| Rust | 1.75+ (build only) | Latest stable |
How do I enable debug logging for a specific crate?
RUST_LOG=librefang_runtime=debug,librefang_channels=info librefang start
Can I use LibreFang as a library?
Yes. Each crate is independently usable:
[dependencies]
librefang-runtime = { path = "crates/librefang-runtime" }
librefang-memory = { path = "crates/librefang-memory" }
The librefang-kernel crate assembles everything, but you can use individual crates for custom integrations.