๐Ÿž Advanced Debugging

When HelseCLI behaves unexpectedly, use these advanced techniques to diagnose the issue.

๐Ÿ” Debug Mode

Enable verbose logging to see exactly what is happening under the hood:

HELSE_DEBUG=1 python helsecli.py code

This will:

  1. Start printing detailed logs to the console.
  2. Save a rich log file to logs/helsecli_debug.log.
  3. Show the actual JSON payloads being sent to LLM providers.
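
To watch the debug log live while you reproduce the problem, tail the file from a second terminal. The log path is the one created by debug mode; the grep line assumes payload entries mention the word "payload", which may vary by version:

# Follow the debug log in a second terminal
tail -f logs/helsecli_debug.log

# Pull out only the provider payloads afterwards (assumes entries mention "payload")
grep -i "payload" logs/helsecli_debug.log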

📜 Audit Trail Analysis

The !steps command is your primary debugger for AI logic.

If an agent gets stuck:

  1. Run !steps.
  2. Look for the first "Failure" or "Error" status.
  3. Check the "Error Details" โ€” this often contains the exact shell error or Python traceback the agent encountered.

๐Ÿณ Sandbox Debugging

If the Docker sandbox is failing:

  1. Check if the image exists: docker images | grep helse-sandbox.
  2. Run a manual container to test connectivity: docker run -it helse-sandbox /bin/bash.
  3. Clear the sandbox cache: rm -rf .helse/sandbox_cache.
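
If you want a single copy-paste check, the first two steps can be chained as below. The image name helse-sandbox comes from step 1; the container is assumed to have a standard shell:

# Quick sandbox health check
if docker images -q helse-sandbox | grep -q .; then
    docker run --rm helse-sandbox echo "sandbox OK"
else
    echo "helse-sandbox image missing: rebuild or pull it before retrying"
fi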

📡 Provider Latency

If responses are slow or timing out:

  1. Use !m to switch to a different provider (e.g., from OpenAI to Gemini) to see if it's a provider-specific issue.
  2. Check your internet connection or any VPN/Proxy settings that might be interfering with API calls.
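
To separate network problems from provider problems, time a raw request to the provider's API host outside HelseCLI. The example uses OpenAI's public endpoint; without a key it returns an auth error, but the timing is still meaningful:

# Measure raw round-trip time to the provider, bypassing HelseCLI
curl -s -o /dev/null -w "connect: %{time_connect}s  total: %{time_total}s\n" https://api.openai.com/v1/models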

🧪 Testing Tools Individually

Each tool in src/tools/ can usually be run as a standalone script for testing.

# Example for testing search
python src/tools/search.py --test "HelseCLI documentation"
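
To smoke-test every tool in one go, a loop like the one below works, assuming each script accepts the same --test flag as search.py (not all of them necessarily do):

# Run each tool's self-test in sequence (assumes every tool supports --test)
for tool in src/tools/*.py; do
    echo "== $tool =="
    python "$tool" --test "HelseCLI documentation" || echo "FAILED: $tool"
done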

📈 Monitoring Token Usage

HelseCLI tracks how many tokens each request uses. Check the bottom of the audit log or the debug console to see if you are approaching the context window limit of your selected model.

  • GPT-4o: 128k-token context window
  • Claude 3.5: 200k-token context window
  • Gemini: 1M+ token context window
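
If you would rather check from the shell than scroll the console, you can pull recent token counts out of the debug log. This assumes debug-mode entries mention the word "token", which may differ in your version:

# Show the last few token-usage entries from the debug log (assumes lines contain "token")
grep -i "token" logs/helsecli_debug.log | tail -n 5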