
What is OnCall?

OnCall is a runtime-aware AI debugger that helps engineers understand and fix production issues faster by reading live logs, tracing failures across services, and surfacing the real root cause.

Featured Article


I Can Write Code Now — So Why Does Debugging Still Eat My Week?

AI can generate code, tests, and refactors, but real-world debugging is still slow. The real bottleneck isn’t fixing bugs — it’s gathering evidence.

Browse by category

Articles


Inside OnCall (Part 2): How Local Log Processing Supercharges Cloud AI Analysis

Why preprocessing logs locally creates cleaner signals and dramatically reduces LLM token usage.

Context Bloat vs Token Hunger: How to Balance LLM Inputs

How to balance context size and token limits in LLM systems without degrading reasoning quality or increasing cost.

The Hidden Cost of Context Bloat in AI Debugging—and How We Avoid It

Why excessive context hurts LLM debugging accuracy, and how minimal, signal-first context dramatically improves reasoning.

Inside OnCall (Part 4): How We Repurposed LangChain Deep Research

How OnCall adapts LangChain’s Deep Research patterns to drive deeper, more reliable debugging-oriented LLM reasoning.

Inside OnCall (Part 3): Git-Aware Debugging and the Importance of Knowing What Changed

How git diffs and time-travel file reads give LLMs precise, change-aware debugging context.
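The gist, as a minimal sketch (the helper names and the file path are illustrative, not OnCall's actual code): `git diff` answers "what changed?", and `git show <rev>:<path>` reads a file as it existed at a past commit.

```python
import subprocess

def recent_diff(repo: str, rev_range: str = "HEAD~1..HEAD") -> str:
    """What changed: the unified diff for a revision range."""
    return subprocess.run(
        ["git", "-C", repo, "diff", rev_range],
        capture_output=True, text=True, check=True,
    ).stdout

def file_at_revision(repo: str, rev: str, path: str) -> str:
    """Time-travel read: a file's contents as of a past commit."""
    return subprocess.run(
        ["git", "-C", repo, "show", f"{rev}:{path}"],
        capture_output=True, text=True, check=True,
    ).stdout

# Pair "what changed" with "what it looked like before" and hand
# both to the model as debugging context.
diff = recent_diff(".")
before = file_at_revision(".", "HEAD~1", "app/handlers.py")  # hypothetical path
```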

Inside OnCall (Part 1): How We Built Fast Local Code Intelligence

How OnCall uses fast, local code intelligence primitives to dramatically improve LLM-powered debugging and reasoning.

Why We Chose ripgrep Over grep for AI-Assisted Code Search

A practical, performance-driven comparison of ripgrep vs grep for AI-assisted code search workflows.
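A rough way to see the difference yourself (the pattern and timing harness are just an example): `rg` respects .gitignore and skips binary files by default, while `grep -r` walks everything unless you exclude paths by hand.

```python
import subprocess
import time

PATTERN, ROOT = "TimeoutError", "."  # example pattern and repo root

def timed(cmd):
    """Run a search command; report wall time and matching line count."""
    start = time.perf_counter()
    out = subprocess.run(cmd, capture_output=True, text=True)
    return time.perf_counter() - start, len(out.stdout.splitlines())

rg_time, rg_hits = timed(["rg", "-n", PATTERN, ROOT])
grep_time, grep_hits = timed(["grep", "-rn", PATTERN, ROOT])

print(f"rg:   {rg_hits} hits in {rg_time:.2f}s")
print(f"grep: {grep_hits} hits in {grep_time:.2f}s")
```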

Why AI Debugging Needs a Hybrid Architecture: Local Context, Cloud Reasoning

Why effective AI debugging requires local runtime introspection paired with cloud-based LLM reasoning.

Why AI Debugging Should Start Locally: The Case for On-Source Context Collection

Why runtime-local context collection is essential for accurate, reliable AI-powered debugging.

The Philosophy

Why Logs > Prompts

The Manifesto. Copilots hallucinate because they lack context. OnCall bets on runtime signals over chat windows. Read why we process logs locally instead of relying on long prompts.
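What "process logs locally" means in practice, as a minimal sketch (the regexes and file name are illustrative, not the actual pipeline): collapse the raw stream into deduplicated error shapes, and send only those to the model instead of a long prompt.

```python
import re
from collections import Counter

ERROR_LINE = re.compile(r"\b(ERROR|FATAL|Exception|Traceback)\b")
NOISE = re.compile(r"0x[0-9a-fA-F]+|[0-9a-f-]{8,}|\b\d+\b")  # addresses, ids, counts

def summarize(log_lines, top=20):
    """Collapse a raw log stream into a compact, deduplicated error summary."""
    shapes = Counter()
    for line in log_lines:
        if ERROR_LINE.search(line):
            # Strip volatile tokens so identical failures dedupe to one shape.
            shapes[NOISE.sub("<id>", line.strip())] += 1
    # Only distinct error shapes (with counts) go to the LLM, not the raw stream.
    return [f"{count}x {shape}" for shape, count in shapes.most_common(top)]

with open("service.log") as f:  # hypothetical log file
    print("\n".join(summarize(f)))
```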

Community & Events

Beyond Prompts

Recaps and recordings from our developer meetups in Bangalore.

OnCall Discord

Join the Runtime Signals server to chat about distributed systems.

Ready to debug faster? OnCall reads your logs so you don’t have to. Get Early Access