AI News · 15 April 2026

Claude Code Routines: Scheduled, API, and Webhook Automations Explained

By Stephen Grindley

Anthropic has introduced Routines, a new feature in Claude Code that lets developers configure automated tasks that run on a schedule, respond to an API call, or fire when specific GitHub events occur. Routines run on Anthropic's web infrastructure, so they do not depend on a developer's laptop being open or connected.

This matters because it moves Claude Code from a tool you interact with manually into something that can operate continuously in the background. For engineering teams, this is the difference between an AI coding assistant and an AI agent that participates in your development workflow around the clock. Routines handle the kind of repetitive, structured work that teams know they should automate but rarely get to: triaging issues, reviewing pull requests against internal checklists, verifying deployments, keeping documentation current.

A note before we continue: Claude Code Routines are not available on the Owlpen platform. Routines are a developer-facing feature within Anthropic's Claude Code product, accessed through the CLI or the Claude Code web interface. The remainder of this article explains how Routines work, what the three trigger types do, and where they fit for engineering teams evaluating AI-assisted automation.

What Routines are

A Routine is an automation you configure once with a prompt, a repository, and any connectors you want it to use. Once saved, it executes without further input. Each Routine gets its own endpoint and authentication token. Results are returned as session URLs that you can inspect to see exactly what the Routine did, which provides a full audit trail.

The concept is straightforward: rather than opening Claude Code and typing instructions each time you need something done, you define the task once and let it run on its own. The underlying model, repository access, and tool use capabilities are the same as an interactive Claude Code session. The difference is that no human needs to be present to start it.

Three trigger types

Routines come in three forms, each suited to a different kind of work. The distinction matters because it determines when the Routine runs, how it receives context, and whether it maintains state between executions.

Scheduled Routines

These execute on a defined cadence: hourly, nightly, weekly, or whatever interval you set. They are well suited to maintenance and housekeeping work. Anthropic highlights two patterns in particular. The first is backlog management, where a nightly Routine triages new issues, applies labels, assigns owners, and posts a summary to Slack. The second is documentation drift detection, where a weekly Routine scans recently merged pull requests for API changes and opens update PRs for any documentation that has fallen out of date.

API Routines

These are triggered by an HTTP POST request to a unique endpoint. They accept a payload, run the configured task, and return a session URL. This makes them composable with existing systems. If your deployment pipeline, monitoring stack, or internal tooling can make an HTTP call, it can trigger a Routine. Anthropic suggests three use cases: post-deployment smoke checks that scan error logs and notify a release channel, alert triage that correlates errors with recent deployments and drafts fixes before paging on-call, and feedback resolution that opens a code session with the relevant context when a user reports a problem.
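To make the integration pattern concrete, here is a minimal sketch of triggering an API Routine from a deployment pipeline. Everything specific in it is an assumption: Anthropic has not published the endpoint format, header names, payload schema, or response shape, so the URL, the `session_url` field, and the payload keys below are illustrative placeholders, not documented API.

```python
import json
import urllib.request

# Hypothetical values: each Routine gets its own endpoint and token when
# created, but the actual URL and token format are not publicly documented.
ROUTINE_ENDPOINT = "https://example.invalid/routines/post-deploy-check"
ROUTINE_TOKEN = "rt_example_token"

def build_trigger_payload(deploy_id: str, environment: str) -> dict:
    """Assemble the context passed to the Routine. The field names are
    illustrative; the real schema is whatever your Routine's prompt expects."""
    return {
        "deploy_id": deploy_id,
        "environment": environment,
        "task": "scan error logs since this deploy and post a summary",
    }

def trigger_routine(payload: dict) -> str:
    """POST the payload to the Routine's unique endpoint and return the
    session URL from the response (response field name assumed)."""
    req = urllib.request.Request(
        ROUTINE_ENDPOINT,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {ROUTINE_TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["session_url"]
```

The point of the sketch is the shape of the integration, not the specifics: any system that can assemble a JSON payload and make an authenticated POST can sit upstream of a Routine.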

GitHub Webhook Routines

These subscribe to GitHub repository events and create a new session for each matching event. The key feature is persistence: for pull requests, the Routine maintains a single session across the lifecycle of that PR. It receives updates when new comments are posted or CI checks fail, and can respond to each one in context. This is different from a one-shot code review tool because the Routine remembers the full conversation. Anthropic describes two patterns: automatic library porting, where changes to a Python SDK are automatically ported to a parallel Go SDK, and custom code review that runs your team's own checklist across security and performance before a human reviewer looks at the PR.

Session persistence

GitHub Webhook Routines maintain a single session per PR. This means the Routine can address follow-up comments, respond to CI failures, and build on its own earlier work without losing context. For teams that want automated review to feel like a persistent collaborator rather than a stateless linter, this is the most significant design choice in the feature.

Usage limits and access

Routines are currently in research preview. They are available to Claude Code users on Pro, Max, Team, and Enterprise plans with Claude Code on the web enabled. Each Routine execution consumes usage from your subscription in the same way as an interactive session.

Daily execution limits vary by plan. Pro subscribers can run 5 Routines per day, Max subscribers get 15, and Team and Enterprise plans allow 25. Additional executions beyond these limits draw from the extra usage allocation on your plan. Routines can be configured through the /schedule command in the Claude Code CLI or through the web interface at claude.ai/code.

What this changes

Before Routines, using Claude Code for recurring tasks meant managing your own cron jobs, keeping a laptop running, or building custom infrastructure around the API. Routines bundle the execution environment, repository access, and connector integrations into a managed service. The shift is from Claude Code as a tool you use to Claude Code as an agentic workflow that runs alongside your team.

For engineering teams, the practical implication is that a class of work that was previously too tedious to automate (because the orchestration overhead exceeded the task itself) now becomes viable. Writing a custom bot to triage GitHub issues requires maintaining code, infrastructure, and credentials. A Routine requires a prompt and a schedule. The trade-off is that you are delegating this work to a general-purpose AI model rather than deterministic code, which means the output will vary and needs to be treated accordingly.

Limitations worth noting

The feature is in research preview, which means capabilities and limits may change. The daily execution caps are relatively low, particularly the Pro plan's five executions per day. For teams with active repositories generating dozens of pull requests daily, this may not be enough to cover every PR with a webhook Routine.

Routines execute with the same capabilities as an interactive Claude Code session. That is both the strength and the constraint. They can read and write code, run commands, and use connected tools, but they are bounded by the model's context window, reasoning ability, and the connectors you configure. Complex multi-step tasks that require human judgement at intermediate stages are better handled by a human-in-the-loop workflow than by a fully autonomous Routine.

There is also no granular permission model for Routines yet. A Routine has the same access as the user who created it. For organisations that need to scope permissions tightly (limiting what an automated process can push to, which branches it can modify, or which external services it can call) this is a gap that will need addressing before Routines are suitable for production use at scale.

Webhook scope

GitHub Webhook Routines currently support pull request events only. Anthropic has indicated that support for additional event types is planned but has not provided a timeline. Teams looking to trigger Routines from issue creation, release publishing, or other GitHub events will need to use the API trigger type and wire the connection themselves for now.
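Wiring that connection yourself amounts to running a small webhook receiver that verifies GitHub's signature, filters for the events you care about, and forwards them to an API Routine. Below is a sketch of the verification and filtering logic only; the event selection in `FORWARDED_EVENTS` and the idea of forwarding to a Routine endpoint are assumptions about how a team might bridge the gap, not a documented pattern.

```python
import hashlib
import hmac

# Hypothetical selection: forward only the GitHub events the Routine should
# handle. Pull request events are covered natively by Webhook Routines, so
# this bridge targets event types that are not yet supported.
FORWARDED_EVENTS = {("issues", "opened"), ("release", "published")}

def verify_signature(secret: bytes, body: bytes, signature_header: str) -> bool:
    """Validate GitHub's X-Hub-Signature-256 header, which carries an
    HMAC-SHA256 of the raw request body in the form 'sha256=<hexdigest>'."""
    expected = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)

def should_forward(event_type: str, payload: dict) -> bool:
    """Decide whether this delivery should trigger the API Routine.
    event_type comes from GitHub's X-GitHub-Event header."""
    return (event_type, payload.get("action", "")) in FORWARDED_EVENTS
```

A delivery that passes both checks would then be POSTed to the Routine's endpoint with whatever context the Routine's prompt expects. Once Anthropic extends webhook support beyond pull requests, this bridge becomes unnecessary.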

Where this fits

Routines are best understood as Anthropic's move to position Claude Code as infrastructure rather than just a developer tool. The pattern of scheduled, event-driven, and API-triggered AI automation is familiar from traditional CI/CD and orchestration systems, but with a general-purpose language model doing the work instead of deterministic scripts. The potential is significant. The practical value today depends on your team's tolerance for non-deterministic automation and the maturity of your review processes for AI-generated output.

For teams already using Claude Code interactively, Routines are a natural next step. For teams evaluating whether to adopt AI-assisted development workflows, Routines raise the stakes: this is no longer about a developer chatting with an AI in their editor, it is about an AI participating in your software delivery pipeline as an autonomous actor.

If you would like to discuss how AI-driven automation fits into your engineering workflows, whether through agentic tooling, the Owlpen platform, or a standalone advisory engagement, contact us at enquiries@coaleypeak.co.uk.

Disclaimer. This article is published by Coaley Peak Ltd for general informational purposes only. The views expressed are those of the author, Stephen Grindley, and do not constitute legal, regulatory, financial, or technical advice. Nothing in this article should be relied upon when making procurement, investment, compliance, or technology decisions. References to third-party products, platforms, and companies are for informational purposes only and do not constitute endorsement. Readers should seek independent professional advice appropriate to their specific circumstances. Information was accurate to the best of the author's knowledge at the date of publication. Coaley Peak Ltd and Stephen Grindley accept no liability for any loss or damage arising from reliance on the contents of this article.