What is OnCall?
OnCall is a runtime-aware AI debugger that helps engineers understand and fix production issues faster by reading live logs, tracing failures across services, and surfacing the real root cause.

Featured Article

AI Can Write Code Now — So Why Does Debugging Still Eat My Week?
AI can generate code, tests, and refactors, but real-world debugging is still slow. The real bottleneck isn’t fixing bugs — it’s gathering evidence.
Browse by category
Architecture
Deep dives into the system design and infrastructure behind OnCall.
Code Intelligence
How OnCall parses, understands, and navigates complex codebases using static analysis and ASTs.
Runtime Signals
Capturing and interpreting real-time execution data, error logs, and dynamic system states.
LLM Reasoning
Deep dives into prompt engineering, context management, and the decision-making logic of our AI agents.
Debugging Cases
Real-world case studies of gnarly bugs and how autonomous agents resolved them.
Engineering Lessons
Retrospectives, best practices, and hard-earned wisdom from building the OnCall platform.
Security & Safety
Our approach to sandboxing, data privacy, and ensuring safe autonomous code execution.
Future of Debugging
The roadmap ahead: moving beyond error fixing to proactive system healing and optimization.
Articles

Inside OnCall (Part 2): How Local Log Processing Supercharges Cloud AI Analysis
Why preprocessing logs locally creates cleaner signals and dramatically reduces LLM token usage.

Context Bloat vs Token Hunger: How to Balance LLM Inputs
A framework for deciding how much context an LLM actually needs, and how to trim inputs without degrading reasoning quality or inflating cost.

The Hidden Cost of Context Bloat in AI Debugging—and How We Avoid It
Why excessive context hurts LLM debugging accuracy, and how minimal, signal-first context dramatically improves reasoning.

Inside OnCall (Part 4): How We Repurposed LangChain Deep Research
How OnCall adapts LangChain’s Deep Research patterns to drive deeper, more reliable debugging-oriented LLM reasoning.

Inside OnCall (Part 3): Git-Aware Debugging and the Importance of Knowing What Changed
How git diffs and time-travel file reads give an LLM the debugging context that matters most: what changed, and when.

Inside OnCall (Part 1): How We Built Fast Local Code Intelligence
How OnCall uses fast, local code intelligence primitives to dramatically improve LLM-powered debugging and reasoning.

Why We Chose ripgrep Over grep for AI-Assisted Code Search
A practical, performance-driven comparison of ripgrep vs grep for AI-assisted code search workflows.

Why AI Debugging Needs a Hybrid Architecture: Local Context, Cloud Reasoning
Why effective AI debugging requires local runtime introspection paired with cloud-based LLM reasoning.

Why AI Debugging Should Start Locally: The Case for On-Source Context Collection
Why runtime-local context collection is essential for accurate, reliable AI-powered debugging.
The Philosophy
Why Logs > Prompts
The Manifesto. Copilots hallucinate because they lack context. OnCall bets on runtime signals over chat windows. Read why we process logs locally instead of relying on long prompts.
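What does "processing logs locally" look like in practice? Here is a minimal sketch in Python, assuming a hypothetical preprocessor — the function names and regex patterns are illustrative, not OnCall's actual pipeline. The idea: normalize away volatile fields so repeated errors collapse together, count the distinct patterns, and send only that distilled signal to the model.

    import re
    from collections import Counter

    # Illustrative patterns for volatile fields; a real pipeline would
    # cover many more formats (request ids, pids, IPs, durations, ...).
    TIMESTAMP = re.compile(r"\d{4}-\d{2}-\d{2}[ T][\d:.,]+")
    HEX_ID = re.compile(r"0x[0-9a-fA-F]+|\b[0-9a-f]{8,}\b")

    def normalize(line: str) -> str:
        # Strip timestamps and ids so identical errors dedupe into one pattern.
        line = TIMESTAMP.sub("<ts>", line)
        line = HEX_ID.sub("<id>", line)
        return line.strip()

    def distill(raw_lines, max_patterns=20):
        # Keep only error-bearing lines, count each normalized pattern,
        # and retain one verbatim example per pattern for the LLM.
        errors = [l for l in raw_lines if "ERROR" in l or "Traceback" in l]
        counts = Counter(normalize(l) for l in errors)
        examples = {}
        for l in errors:
            examples.setdefault(normalize(l), l.strip())
        return [
            {"count": n, "pattern": p, "example": examples[p]}
            for p, n in counts.most_common(max_patterns)
        ]

    if __name__ == "__main__":
        with open("app.log") as f:
            for entry in distill(f):
                print(f'{entry["count"]:>6}x  {entry["pattern"]}')

A service can emit megabytes of logs per minute, but after this kind of deduplication the distinct error patterns typically fit in a few hundred tokens — which is what makes cloud-side LLM reasoning both affordable and focused.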
Community & Events
Beyond Prompts
Recaps and recordings from our developer meetups in Bangalore.
OnCall Discord
Join the Runtime Signals server to chat about distributed systems.
Ready to debug faster? OnCall reads your logs so you don’t have to. Get Early Access