Go LLM gateway — one interface for Claude Code, Codex, Gemini CLI, Anthropic, OpenAI, Qwen, and vLLM.
Go · Updated Dec 1, 2025
A CLI tool to manage multiple AI coding sessions, with real-time health monitoring and token tracking.
Turnkey prompt-chaining template for Claude/OpenAI workflows. LangGraph-orchestrated sequential agents (Analysis→Processing→Synthesis) with validation gates. Observability-first: automatic distributed tracing, token/cost tracking, and quality metrics with zero boilerplate.
Autonomous multi-agent system combining GPT-4 orchestration with Claude Code CLI execution. Transform ideas into code through natural language with real-time tracking, GitHub integration, and intelligent project management.
Comprehensive cost & token usage analyzer for Claude Code with detailed breakdowns by model and token type. Uses ccusage CLI tool and LiteLLM pricing data. The raw token usage data is available in ${HOME}/.claude/projects/<project-name>/<conversation-id>.jsonl files.
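Since each conversation's raw usage lives in those `.jsonl` files, per-project totals can be computed with a few lines of standard-library Python. A minimal sketch follows; note that the record schema it assumes (a `message` object carrying a `usage` dict with `input_tokens`/`output_tokens` counts) is an assumption about the file format, not a documented contract, so adjust the field names to whatever your actual files contain.

```python
import json

def sum_token_usage(jsonl_lines):
    """Sum input/output token counts across JSONL session records.

    Assumes each line is a JSON object shaped like
    {"message": {"usage": {"input_tokens": N, "output_tokens": M}}};
    records without a usage block contribute zero.
    """
    totals = {"input_tokens": 0, "output_tokens": 0}
    for line in jsonl_lines:
        line = line.strip()
        if not line:
            continue  # skip blank lines
        record = json.loads(line)
        usage = record.get("message", {}).get("usage", {})
        for key in totals:
            totals[key] += usage.get(key, 0)
    return totals

# Example with inline sample records instead of reading real files:
sample = [
    '{"message": {"usage": {"input_tokens": 120, "output_tokens": 45}}}',
    '{"message": {"usage": {"input_tokens": 80, "output_tokens": 30}}}',
]
print(sum_token_usage(sample))  # {'input_tokens': 200, 'output_tokens': 75}
```

In practice you would feed it the lines of each file under `${HOME}/.claude/projects/`, e.g. via `pathlib.Path(...).rglob("*.jsonl")`, and multiply the totals by per-model prices to get costs.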
Sleek Streamlit chat app for Google Gemini (Files API). Dark, gradient UI with model picker, usage dialog, file/image/audio/PDF attach & preview, chat history, image persistence, robust error handling, and token usage tracking. Supports streaming replies and modular backend via google-genai.
🚀 Manage and optimize AI coding sessions seamlessly with LLM Session Manager for enhanced collaboration and performance monitoring.
Session profiler and observability tool for AI coding assistants (Claude Code, Codex CLI, Gemini CLI). Real-time token tracking, MCP efficiency analysis, and cost optimization.