Claude Code Remote Agents: No Local Machine Needed
Anthropic launched Remote Agents for Claude Code in April 2026 — run AI coding sessions on cloud infrastructure without keeping your laptop open.
April 2026 · AI News
Claude Managed Agents — Anthropic Takes Over Agent Infrastructure
Building AI agents is relatively easy. Running them in production is the hard part. Session management, failure recovery, orchestration — coordinating multiple agents toward a goal, like a project manager who never sleeps — all of that had to be built by hand. On April 8, 2026, Anthropic pushed that complexity behind an API with Claude Managed Agents.
This wasn't a surprise launch. Remote Control shipped in February. Remote Tasks followed on March 20. Managed Agents is the third piece — aimed squarely at production use cases and enterprise teams. The pattern is deliberate: ship the simple version first, gather real usage data, then build the infrastructure layer on top.
Early design partners include Notion, Rakuten, and Asana. Anthropic's blog claims "10x faster to production." That number isn't independently verified. The direction is clear, though: Anthropic is going after the agent infrastructure market directly. Here's what actually shipped and what it means.
· Launched: April 8, 2026 (public beta)
· Pricing: API token rates + $0.08 / active session-hour
· GA now: session persistence, checkpointing, scoped permissions, Console tracing
· Research preview: multi-agent orchestration, cross-session memory, Outcomes
· Early adopters: Notion, Rakuten, Asana
- The Three-Step Rollout
- Session Persistence and Checkpointing
- Scoped Permissions and Console Tracing
- Pricing: What You Actually Pay
- Remote Tasks vs. Managed Agents
- What's Still in Research Preview
- Multi-Agent Orchestration: The One Everyone Wants
- Getting Started
- Who Actually Needs This
- The Bigger Picture
- FAQ
The Three-Step Rollout
Anthropic didn't ship everything at once. February brought Remote Control — basic remote execution for single tasks. March 20 added Remote Tasks, which connected agents to GitHub repositories and let them read code, write fixes, and open pull requests without a local terminal. April's Managed Agents layer adds what production systems actually need: persistent sessions, checkpointing, and permission controls.
Think of it like building a house. Remote Control poured the foundation — proving that agents could run remotely at all. Remote Tasks framed the walls — adding real workflow integration with version control. Managed Agents is the roof: the durability and access controls that make the structure usable long-term in a real environment.
The pacing signals something about Anthropic's strategy. They shipped with real customers validating each layer. Notion, Rakuten, and Asana were reportedly involved before the April announcement. That's slower than a typical AI product launch. It's also more credible. By the time Managed Agents shipped, the underlying remote execution infrastructure had already handled real workloads.
Session Persistence and Checkpointing
Session persistence is the foundation of everything Managed Agents does. Without it, every agent call is stateless — the agent forgets everything the moment the request ends, like a goldfish that resets between cleanings. With session persistence, the agent maintains its full context across multiple interactions. A task that spans hours doesn't lose its memory when the connection drops or the next call comes in.
Checkpointing is the save-game feature for agent workflows. The agent writes its progress to durable storage at regular intervals. If the process crashes, times out, or hits a transient API error, the next run picks up from the last checkpoint — not from zero. Before checkpointing, assigning a 3-hour codebase audit to an agent was risky. A timeout halfway through could leave code in a broken intermediate state.
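The checkpoint-and-resume pattern itself is straightforward to sketch in plain Python. This is a conceptual illustration, not Anthropic's implementation; the file name, state shape, and `run_audit` stand-in are all hypothetical.

```python
import json
import os

CHECKPOINT_PATH = "audit_checkpoint.json"  # hypothetical durable-storage stand-in

def load_checkpoint():
    """Resume from the last saved position, or start fresh."""
    if os.path.exists(CHECKPOINT_PATH):
        with open(CHECKPOINT_PATH) as f:
            return json.load(f)
    return {"completed": [], "next_index": 0}

def save_checkpoint(state):
    """Write progress atomically so a mid-write crash can't corrupt the file."""
    tmp = CHECKPOINT_PATH + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
    os.replace(tmp, CHECKPOINT_PATH)

def run_audit(endpoints):
    state = load_checkpoint()
    for i in range(state["next_index"], len(endpoints)):
        result = f"audited {endpoints[i]}"  # stand-in for real agent work
        state["completed"].append(result)
        state["next_index"] = i + 1
        save_checkpoint(state)  # a crash after this line loses at most one item
    return state["completed"]

print(run_audit(["/login", "/users", "/billing"]))
```

A rerun after a crash reloads the state file and skips everything already completed, which is exactly why a timeout halfway through a long audit no longer means starting from zero.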
Together, these two features change what's actually feasible to delegate. You can assign tasks that realistically take hours: audit all API endpoints, review every authentication handler, generate documentation for an entire service. The economics of long tasks shift significantly when you're not betting everything on zero failures end-to-end.
Scoped Permissions and Console Tracing
Scoped permissions define what an agent is allowed to access at the infrastructure level. Think of it as a security clearance system — like a contractor who can enter the office floor but not the server room. You specify that an agent can read from your documentation bucket but cannot write to production databases. That boundary is enforced by the API, not just by instructions in the prompt.
The distinction matters. Prompt-level instructions ("don't touch the database") are helpful but not trustworthy for production use. A sufficiently complex task can lead an agent to interpret those instructions in unexpected ways. Scoped permissions at the infrastructure level are a hard boundary. The agent cannot cross it regardless of what its reasoning produces. That's what makes Managed Agents viable for enterprise deployments where a hallucinating agent could cause real damage.
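The hard-boundary idea can be sketched in a few lines: every tool call is validated against granted scopes before it executes, so the check sits outside the model's reasoning entirely. The scope format and function names below are hypothetical, loosely modeled on the `permissions` dict in the SDK example later in this article.

```python
# Hypothetical enforcement check: a grant is a (verb, path-prefix) pair,
# and every tool call is validated against the grants before it runs.
GRANTED = [("read", "src/"), ("write", "docs/")]

def is_allowed(verb: str, path: str) -> bool:
    """True only if some granted scope covers this action."""
    return any(verb == v and path.startswith(prefix) for v, prefix in GRANTED)

def guarded_write(path: str, text: str) -> None:
    if not is_allowed("write", path):
        # Hard boundary: rejected regardless of what the agent's
        # reasoning produced or how the prompt was interpreted.
        raise PermissionError(f"scope does not permit write:{path}")
    print(f"would write {len(text)} bytes to {path}")

guarded_write("docs/report.md", "findings...")    # allowed
# guarded_write("prod/db.sql", "DROP TABLE ...")  # raises PermissionError
```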
Console tracing gives you a full timeline of every action the agent took. Every tool call, every decision branch, every retry — recorded and accessible after the fact. When an agent fails in production, you need to know exactly where it went wrong and why. Without tracing, debugging is guesswork. With it, post-mortems become tractable. This is the observability layer that agentic workflows have been missing.
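The shape of such a trace is easy to picture with a small decorator that records each tool call and its outcome. This is a sketch of the general observability pattern, not the Console's actual data model; `TRACE`, `traced`, and `run_tests` are illustrative names.

```python
import functools
import time

TRACE = []  # in a real system this would stream to durable storage

def traced(fn):
    """Record every call, its arguments, and its result or error."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        entry = {"tool": fn.__name__, "args": args, "t": time.time()}
        try:
            entry["result"] = fn(*args, **kwargs)
            return entry["result"]
        except Exception as exc:
            entry["error"] = repr(exc)
            raise
        finally:
            TRACE.append(entry)
    return wrapper

@traced
def run_tests(module):
    return f"{module}: 12 passed"  # stand-in for a real tool call

run_tests("auth")
print(TRACE[-1]["tool"], "->", TRACE[-1]["result"])
```

With every entry timestamped and failures captured alongside successes, a post-mortem becomes a matter of reading the timeline rather than reconstructing it.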
Generally Available (GA): Session persistence · Checkpointing · Scoped permissions · Console tracing
Research Preview (separate access required): Multi-agent orchestration · Cross-session memory · Outcomes
No GA date has been announced for the preview features. Don't build production dependencies on them yet.
Pricing: What You Actually Pay
The pricing model has two components. You pay standard Claude API token rates — the same rates you already pay for direct API calls. On top of that, active sessions cost $0.08 per hour. That's the infrastructure cost for keeping session state alive while the agent works. The total bill for most tasks is dominated by token costs, not session fees.
The session fee becomes meaningful at scale. A 10-minute code review adds about $0.013 in session charges — negligible. A 3-hour codebase audit adds $0.24. Scale that to 50 concurrent 3-hour sessions running daily, and session fees add roughly $360 per month before token costs. That's not prohibitive, but it's worth factoring into a production budget.
Beta pricing can change. Anthropic has adjusted rates on other products after beta periods. The $0.08/hr figure is current as of the April 8 announcement, but no long-term pricing commitments were made. Check the official pricing page before committing to any production cost model built around this number.
| Task Duration | Session Fee | Typical Use Case |
|---|---|---|
| 10 minutes | ~$0.013 | Quick PR review, bug fix |
| 30 minutes | ~$0.04 | Feature implementation, test generation |
| 1 hour | $0.08 | Module refactor, security scan |
| 3 hours | $0.24 | Full codebase audit, doc generation |
| 8 hours | $0.64 | Day-long migration, large-scale review |
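The arithmetic behind the table and the budget numbers above is simple enough to encode directly; this calculator covers session fees only, since token costs are billed separately and depend on the workload.

```python
SESSION_RATE = 0.08  # USD per active session-hour (beta rate, April 2026)

def session_fee(hours: float, concurrent: int = 1, days: int = 1) -> float:
    """Session-hour charges only; token costs are billed separately."""
    return round(hours * SESSION_RATE * concurrent * days, 2)

print(session_fee(10 / 60))                    # a 10-minute review
print(session_fee(3))                          # a 3-hour audit
print(session_fee(3, concurrent=50, days=30))  # 50 daily 3-hour sessions/month
```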
Remote Tasks vs. Managed Agents
Remote Tasks and Managed Agents solve different problems. Remote Tasks is for discrete, bounded jobs — "review this PR," "fix this bug," "generate tests for this module." One task, one run, done. No session state to manage, no session fee to pay. It's the right tool for the majority of developer workflows right now.
Managed Agents is for workflows that span time, require coordination between agents, or need to be embedded in a product. The upgrade path is clear: when your tasks routinely exceed 30 minutes, when a single agent isn't enough, or when you're building agents into something users interact with directly — that's when Remote Tasks hits its ceiling and Managed Agents starts making sense.
| | Remote Tasks | Managed Agents |
|---|---|---|
| Session persistence | No | Yes |
| Checkpointing | No | Yes |
| Multi-agent orchestration | No | Preview |
| Scoped permissions | No | Yes |
| Console tracing | No | Yes |
| GitHub integration | Yes | Yes |
| Session-hour fee | No | $0.08 / hr |
| Best for | Single tasks, quick fixes | Long jobs, production services |
What's Still in Research Preview
Three features are in research preview: multi-agent orchestration, cross-session memory, and Outcomes. They require separate access. No GA timeline has been announced. Anthropic described all three at launch, but building a production dependency on any of them right now is a mistake.
Cross-session memory lets agents accumulate knowledge across separate sessions. Think of it as long-term memory versus short-term memory — the agent in a new session can recall what it learned in previous ones. Without it, every session starts cold. With it, an agent that has reviewed your codebase a dozen times builds up a working model of it over time. That's a qualitative shift in what repeated-use agents can do.
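Conceptually, cross-session memory is a durable store that outlives any single session. The sketch below is purely illustrative, since no public API exists for this preview feature; the file-backed store and function names are hypothetical.

```python
import json
import os

MEMORY_PATH = "agent_memory.json"  # hypothetical durable store shared by sessions

def recall() -> dict:
    """What previous sessions learned about this codebase."""
    if os.path.exists(MEMORY_PATH):
        with open(MEMORY_PATH) as f:
            return json.load(f)
    return {}

def remember(key: str, value: str) -> None:
    memory = recall()
    memory[key] = value
    with open(MEMORY_PATH, "w") as f:
        json.dump(memory, f)

# Session 1 learns something; session 2 starts with it already known.
remember("auth_module", "uses JWT with 15-minute expiry")
print(recall())
```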
Outcomes is the most conceptually significant feature in preview. Instead of following a prompt, the agent works toward a defined success criterion — "the test suite should pass," "no high-severity security findings should remain." The agent keeps working until the criterion is met or it gives up. That's goal-oriented execution, not instruction-following. The implementation details aren't public. What it means for reliability and cost isn't clear yet either.
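The control flow implied by Outcomes, work until a success criterion holds or a budget runs out, can be sketched as a loop. This is a toy model of goal-oriented execution, not Anthropic's implementation; the criterion and attempt functions here are stand-ins.

```python
def run_until(criterion, attempt, max_attempts=5):
    """Goal-oriented loop: keep working until the success criterion holds."""
    for n in range(1, max_attempts + 1):
        result = attempt(n)
        if criterion(result):
            return {"met": True, "attempts": n, "result": result}
    return {"met": False, "attempts": max_attempts}

# Toy stand-in for "the test suite should pass": each pass fixes one failure.
outcome = run_until(
    criterion=lambda r: r["failures"] == 0,
    attempt=lambda n: {"failures": max(0, 3 - n)},
)
print(outcome)  # meets the criterion on the third attempt
```

The open reliability and cost questions fall out of this structure: each retry costs tokens, and a criterion the agent cannot satisfy burns the whole budget before giving up.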
Multi-Agent Orchestration: The One Everyone Wants
Multi-agent orchestration is what most developers are waiting for. Instead of one agent doing everything sequentially, multiple specialized agents work in parallel and hand off results. One agent reads the codebase. Another writes documentation. A third runs tests. A coordinator integrates the outputs. The limiting factor shifts from agent speed to orchestration overhead.
The performance implications are real. A full-stack feature that takes one agent 3 hours to complete might take four coordinating agents 45 minutes. Parallel execution at the agent level works the same way Promise.all works at the code level — you're running multiple things simultaneously instead of one at a time, and you wait only as long as the slowest piece. The savings compound on larger tasks.
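The Promise.all analogy maps directly onto `asyncio.gather` in Python. This sketch shows the timing property only, total wall time tracks the slowest agent rather than the sum; the agents here are simulated with sleeps, not real API calls.

```python
import asyncio

async def agent(task: str, seconds: float) -> str:
    """Stand-in for a specialized agent working on one slice of the job."""
    await asyncio.sleep(seconds)
    return f"{task} done"

async def main():
    # Parallel execution: wall time ~ the slowest agent (0.3s), not 0.6s total.
    return await asyncio.gather(
        agent("read codebase", 0.3),
        agent("write docs", 0.2),
        agent("run tests", 0.1),
    )

print(asyncio.run(main()))
```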
This is still in research preview, and that matters. The coordination protocols, shared context mechanisms, and failure handling for multi-agent workflows are genuinely hard to get right. One agent failing mid-task affects the others. Context sharing between agents raises memory management questions. Anthropic hasn't published implementation details for what's in preview. Notion and Asana are presumably testing it. The rest of us wait.
Getting Started
Managed Agents is accessible via the Anthropic SDK. The basic pattern: create a session, send messages to that session by referencing the session ID, and the session persists between calls. Closing the session ends billing for that session-hour block. The workflow mirrors how you'd use a stateful conversation API — create once, interact multiple times, close when done.
The code below shows the conceptual pattern. Exact API signatures are subject to change during beta. Check the official docs before writing production code against any of these methods. The structure is illustrative, not a guaranteed contract.
import anthropic
client = anthropic.Anthropic()
# Create a session — context persists between calls
session = client.beta.agents.sessions.create(
model="claude-opus-4-6",
system="You are a senior code reviewer with read access to the repository.",
permissions={"file_system": ["read:src/", "write:docs/"]}
)
# First task — agent reads and analyzes
response = client.beta.agents.sessions.message(
session_id=session.id,
content="Audit all authentication handlers for security issues."
)
# Same session, full context retained — no re-explaining needed
report = client.beta.agents.sessions.message(
session_id=session.id,
content="Now write a security report based on what you found."
)
# Close when done — stops session-hour billing
client.beta.agents.sessions.close(session_id=session.id)
Method names and parameters may differ in the shipped SDK; the current reference is at docs.anthropic.com/en/docs/agents.
Who Actually Needs This
Solo developers doing occasional code tasks: Remote Tasks is still the right tool. It's simpler, cheaper, and handles the majority of single-agent workflows without any session overhead. Managed Agents starts making sense when tasks routinely run longer than 30 minutes, when you need agents coordinating in parallel, or when you're building a product with agents embedded in the user experience.
The Notion and Asana integrations signal where Anthropic is aiming. These aren't power users running one-off commands in a terminal. They're companies embedding agents into user-facing products — where agent behavior needs to be predictable, auditable, and controllable at scale. That's a fundamentally different requirement than a developer running an occasional code review.
Enterprise teams with compliance requirements benefit immediately from the GA features. Scoped permissions and console tracing address the two main blockers for enterprise adoption of agent tools: "can this agent access something it shouldn't?" and "what exactly did the agent do?" Both are now answerable. That's not a small thing.
| Situation | Recommended Tool | Reason |
|---|---|---|
| Quick PR review or bug fix | Remote Tasks | No session overhead needed |
| Task runs 30+ minutes | Managed Agents | Checkpointing prevents loss on failure |
| Agents embedded in a product | Managed Agents | Persistent sessions, permission controls |
| Enterprise with compliance needs | Managed Agents | Scoped permissions + audit trail |
| Solo developer, occasional use | Remote Tasks | Simpler, no session fee |
| Multiple agents coordinating | Managed Agents (preview) | Multi-agent orchestration in preview |
The Bigger Picture
Anthropic isn't the only company building agent infrastructure. OpenAI has its Agents SDK and operator/user permission models. Microsoft has Azure AI Agent Service. Google has Vertex AI Agent Builder. The agent infrastructure layer is becoming a competitive battleground. Every major AI company is making the same bet: that the next wave of revenue comes from running agents, not just serving models.
Anthropic's specific angle is vertical integration. The model, the execution environment, the permission system, and the observability tooling all come from the same company. That means fewer integration gaps and a cleaner debugging story when something goes wrong. It also means more vendor lock-in — a tradeoff that enterprise buyers evaluate carefully before committing. Using Claude through Managed Agents ties your agent architecture to Anthropic's roadmap in a way that using the plain API doesn't.
Managed Agents is the clearest signal yet that Anthropic views itself as an infrastructure company, not just a model provider. The GA release is solid. The research preview features — multi-agent orchestration and Outcomes especially — are the ones that will define whether this is a convenience layer or a genuine platform shift. Watch the preview access timeline. That's where the real answer is.
FAQ
Q. What is Claude Managed Agents?
A hosted execution environment for AI agents. Session management, checkpointing, and permission controls come as an API. You don't need to build your own state management or failure recovery infrastructure. Anthropic handles that layer. You write the agent logic and call the API.
Q. How much does it cost?
Standard Claude API token rates plus $0.08 per active session-hour. For most tasks under an hour, session fees are a minor line item. Token costs dominate the bill. For multi-hour tasks running at scale, session fees become worth tracking. Beta pricing may change — check the official pricing page.
Q. What's the difference from Remote Tasks?
Remote Tasks runs one bounded task against a repository. One task, one run, no session state. Managed Agents adds persistent sessions that survive across multiple calls, checkpointing that saves progress on long tasks, scoped permissions for access control, and console tracing for observability. It's the production infrastructure layer on top of remote execution.
Q. Is multi-agent support available now?
Not publicly. Multi-agent orchestration, cross-session memory, and Outcomes are in research preview. Separate access is required. No GA date has been announced. Design partners like Notion and Asana are presumably testing these features. Public availability is unknown.
Q. Do solo developers need this?
Probably not yet. Remote Tasks covers most single-agent use cases without session overhead or session fees. Managed Agents makes sense when tasks run for hours, when coordination between agents matters, or when you're building agents into a product that users rely on. If Remote Tasks handles your workflow, stick with it for now.
Managed Agents shipped four GA features and three research preview features. The GA features are production-ready today. The preview features are the ones that will define the platform's ceiling — but they require separate access and carry no shipping commitment. Build on what's GA. Watch what's in preview. That's the right posture for April 2026.
The pricing is reasonable for long-running agent tasks. The permission model is the right approach for production deployments. And console tracing — a full audit trail of every action the agent took — addresses something the industry has needed since agents started running in production. Useful infrastructure, not a breakthrough. Useful infrastructure compounds.
· Anthropic Blog — Claude Managed Agents announcement
· Anthropic Docs — Agents
· Anthropic Pricing
Related Posts
Pricing and availability based on Anthropic's April 8, 2026 announcement. Beta details subject to change without notice. Research preview features require separate access approval and have no confirmed GA timeline.
Last updated: April 2026 · GoCodeLab