T-Lang compresses your prompts server-side before they reach the LLM. Setup is two commands, which your AI agent can run for you. You only need to click the login link once.
npx tlang-cli auth login
# → Opens: https://tlang-eu5.pages.dev/auth/device?code=ABC123
# → You log in once → CLI auto-receives your key

npx tlang-cli provider add openai --key sk-your-key
# Built-in providers: openai | gemini | grok | deepseek | groq

npx tlang-cli chat "Summarize this article about quantum computing" --model gpt-4o-mini
# Output (JSON):
# {
# "content": "Quantum computing uses qubits...",
# "compression": { "original_tokens": 45, "compressed_tokens": 12, "saved_tokens": 33, "rate": "73.3%" }
# }
#
# stderr: T-Lang: 45 -> 12 tokens (saved 33, -73.3%)

Compression stays on the server. Your API keys stay on your machine.
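The savings figures in the output follow directly from the two token counts. A minimal sketch of that arithmetic (the `compressionStats` helper is ours for illustration, not part of T-Lang):

```typescript
// Hypothetical helper reproducing the savings math shown in the example output.
// The function name and shape are ours; T-Lang's internals may differ.
interface CompressionStats {
  original_tokens: number;
  compressed_tokens: number;
  saved_tokens: number;
  rate: string;
}

function compressionStats(original: number, compressed: number): CompressionStats {
  const saved = original - compressed;
  const rate = ((saved / original) * 100).toFixed(1) + "%";
  return {
    original_tokens: original,
    compressed_tokens: compressed,
    saved_tokens: saved,
    rate,
  };
}

// Matches the example above: 45 -> 12 tokens
console.log(compressionStats(45, 12));
// → { original_tokens: 45, compressed_tokens: 12, saved_tokens: 33, rate: '73.3%' }
```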
The CLI sends your prompt to the T-Lang Worker for compression. Zero dependencies; Node.js 18+ is all you need.
The Worker compresses the prompt using the T-Lang DSL and returns the compressed messages. It never sees your provider key.
The CLI then calls OpenAI/Gemini/Grok directly with your local key. Fewer tokens means a lower bill.
T-Lang only compresses your prompt. Your OpenAI/Gemini/Grok API key goes directly from your CLI to the provider. We never see it.
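That key separation can be sketched as two distinct request builders. The endpoint URL and payload shapes below are assumptions for illustration, not T-Lang's documented API:

```typescript
// Hypothetical request builders illustrating the key-separation design.
// The Worker URL and body shapes are assumptions, not T-Lang's actual API.
interface Message {
  role: string;
  content: string;
}

// Step 1: only the prompt text goes to the T-Lang Worker. No key field exists.
function buildCompressRequest(messages: Message[]) {
  return {
    url: "https://tlang-worker.example/compress", // placeholder URL
    body: { messages },                           // note: no API key anywhere
  };
}

// Step 2: the CLI calls the provider directly; your key travels only from
// your machine to the provider.
function buildProviderRequest(compressed: Message[], apiKey: string, model: string) {
  return {
    url: "https://api.openai.com/v1/chat/completions",
    headers: { Authorization: `Bearer ${apiKey}` },
    body: { model, messages: compressed },
  };
}
```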
Designed for AI coding agents. Two commands to install. One browser click to authorize. Your agent handles the rest.
Automatically skips code blocks, short messages, and system prompts. Zero quality loss on supported patterns.
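Those skip rules can be sketched as a simple guard. The 20-token threshold and fence detection here are our illustrative assumptions, not T-Lang's actual heuristics:

```typescript
interface Message {
  role: string;
  content: string;
}

// Hypothetical guard mirroring the documented skip rules: code blocks, short
// messages, and system prompts pass through uncompressed. The threshold and
// regex are assumptions for illustration only.
function shouldCompress(msg: Message): boolean {
  if (msg.role === "system") return false;              // system prompts: skip
  if (/```/.test(msg.content)) return false;            // fenced code blocks: skip
  const approxTokens = msg.content.trim().split(/\s+/).length; // rough estimate
  if (approxTokens < 20) return false;                  // short messages: skip
  return true;
}
```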
Start free. Upgrade when it pays for itself.
No credit card required
Pays for itself after ~$25 in API savings
Exceeded the free tier's daily limit? Your API calls still work, just without the compression savings.