Google Expanded Pentagon AI Access That Anthropic Refused
Google granted its AI models access to the U.S. Department of Defense's classified network — just days after Anthropic officially refused the same request from the Trump administration. The Big Three AI companies have now taken completely divergent paths on defense collaboration.
Table of Contents
- What Happened in 96 Hours
- What a Classified Network Is — A Completely Different World from the Internet
- Why Google Chose This Market
- The Background to Anthropic's Official Refusal
- OpenAI Was Already Collaborating
- Comparing the Big Three AI Companies on Defense Cooperation
- The Policy Clash Right After a $40B Investment
- What Developers Using the Claude API in Practice Need to Know
- The Actual Flow for Adopting Google Cloud Government AI
- How Enterprises Can Build Their Own AI Policy Layer
- The Trump Administration's Next Move
- The Ripple Effects of This Decision on the Industry
- Which Path Is Right
- Frequently Asked Questions
- Closing
April 2026 · AI News
On April 28, Google officially expanded its AI models' access to the U.S. Department of Defense's classified network. A classified network is a military-exclusive network physically isolated from the internet. Connecting AI to it means the AI can directly process classified-grade data.
The timing was striking. Four days earlier, on April 24, Google had finalized a $40 billion investment in Anthropic. And on that same day, April 28, Anthropic officially refused an identical request from the Trump administration. An investor and its investee had chosen opposite paths.
This is the first time the Big Three AI companies have publicly diverged on defense collaboration. Google and OpenAI moved toward expanding Pentagon cooperation. Anthropic chose to hold its safety policy line. That split carries more significance than a simple contract refusal.
- April 24 — Google finalizes $40B investment in Anthropic
- April 28 — Google expands AI access to U.S. DoD Classified Network
- April 28 — Anthropic officially refuses the same request from the Trump administration
- OpenAI — Pentagon collaboration ongoing since 2024 policy revision
- Result — Anthropic is the only one of the Big Three holding a non-defense-cooperation line
- Verdict — Too early to call which path is right; this bears watching
What Happened in 96 Hours
On April 24, the announcement came that Google had officially completed its $40 billion investment in Anthropic, raising Google's stake in the company. On the surface, the two companies were closer partners than ever.
96 hours later, on April 28, that picture changed completely. Google decided to expand its AI models' access to the U.S. DoD classified network. On the same day, Anthropic officially refused an identical request from the Trump administration. Whether the underlying decisions were made on the same day as the press announcements is unclear, but the gap between the two announcements was only a matter of hours.
Given the investment relationship, the public clash looks even sharper. Google is one of Anthropic's largest single investors. Amazon has also made a large-scale investment in Anthropic. Yet the direction both major investors chose on defense collaboration was completely at odds with Anthropic's decision.
This 96-hour timeline may not be a coincidence. From Google's perspective, announcing a Pentagon contract immediately after finalizing the Anthropic investment solidified its position in the defense market. Anthropic's refusal, meanwhile, was a public signal confirming an independent safety policy. Whether the two companies coordinated this in advance or decided independently has not been disclosed.
What a Classified Network Is — A Completely Different World from the Internet
A classified network is a U.S. Department of Defense-exclusive network physically separated from the internet by an air gap. An air gap means the two networks share no physical connection, so data cannot travel directly between them. Think of it as a DoD-only intranet that no outsider can enter without physically plugging in a USB drive.
The DoD's classified networks are divided into multiple tiers by security level. SIPRNet (Secret) and JWICS (Top Secret) are the most prominent. Documents and communications processed there contain sensitive military information. Until now, commercial cloud services' access to these networks has been extremely restricted.
Running an AI model inside this network carries significant implications. AI could be deployed for sensitive military tasks such as summarizing classified documents, intelligence analysis, targeting support, and tactical decision-making assistance. Until now, restricting AI use to the Unclassified domain was standard practice. Google pushed that boundary into the classified domain.
This is why it is not simply a contract expansion. Access to a classified network is as much a matter of political trust as of technical requirements. Connecting a service that carries any risk of exposing data to foreign parties is unthinkable. That Google received this clearance means its trust level within the DoD is considerable.
Why Google Chose This Market
Google's decision was not sudden. Google Cloud has long provided cloud services to U.S. federal government and defense-related agencies. The completion of Google Cloud's FedRAMP High certification and contracts with the Defense Information Systems Agency (DISA) are the result of years of work. This expansion of classified network access is a continuation of that effort.
Looking at the market size, the decision makes sense. The DoD's annual IT budget is around $50 billion. AI has now started entering that space in earnest. Microsoft Azure has already won major government cloud contracts like JEDI and JWCC. If Google steps back from this market, Microsoft will likely take the AI space along with it.
The competitive landscape was decisive. With AWS, Microsoft, and Oracle deeply embedded in the U.S. government cloud market, Google falling behind in the AI domain too would be a strategic loss for Alphabet as a whole. Entering this market is essential to positioning the Gemini model family as a validated AI in military and government settings. Google did that calculation.
From Alphabet shareholders' perspective, it was also a natural decision. In 2026, with the AI arms race elevated to the level of national strategy, staying out of U.S. government AI contracts is not just a moral choice — it is a matter of shareholder value. Google, which walked away from a drone AI contract in 2018 under Project Maven due to employee backlash, reversed course eight years later.
The Background to Anthropic's Official Refusal
Anthropic's Claude Usage Policy draws a clear line between permitted and prohibited uses. Weapons development support, autonomous lethal system construction, mass surveillance systems, and cyberattack tool creation are explicit prohibitions. This policy has existed since the company's founding and has never been officially relaxed.
The exact content of the Trump administration's request has not been made public. But the fact that Anthropic issued a public refusal statement means the scope of the request clearly exceeded the boundary of the current policy. If it had been at the level of non-combat support or administrative automation, there would have been no reason to refuse.
Anthropic's governance structure made this decision possible. Anthropic is incorporated as a Public Benefit Corporation, so the company's core value of AI safety is not simply a marketing phrase; it is a principle protected legally and through governance. Investors are by design unable to easily change it.
This decision is a signal that Anthropic chose long-term trust over short-term revenue. Forgoing the defense market was not an easy decision for management. But allowing even one exception to the Claude usage policy would undermine the credibility of that principle itself. Anthropic held that line.
Pros of Anthropic's Refusal
- Strengthened safety brand asset
- Advantageous in European and regulated markets
- Trust-building with public sector clients
- Reduced internal talent attrition
- Long-term positioning in an era of tightening regulation
Cons of Anthropic's Refusal
- Loss of defense market revenue opportunity
- Risk of exclusion from government subsidies and contracts
- Short-term growth constraints
- Deteriorating relationship with the Trump administration
- Market share loss relative to competitors
OpenAI Was Already Collaborating
OpenAI went down this road first. In early 2024, OpenAI revised its usage policy, quietly removing a clause from an earlier version that prohibited "use for military and warfare purposes." Only one clause disappeared, but in practice the door to Pentagon cooperation was opened.
There was internal pushback at the time. Some researchers on the safety and ethics team reportedly objected to the decision. But OpenAI's leadership pushed through with the position that "contributing to national security is a responsibility." Multiple AI cooperation contracts with agencies under the U.S. DoD were subsequently signed.
What OpenAI is currently doing in the defense space has not been publicly disclosed in detail. Cybersecurity threat detection, intelligence collection data analysis, military logistics optimization, and tactical decision support are reported to be within the scope of cooperation. Whether it involves direct integration with weapons systems is unclear.
With Google's decision, OpenAI and Google are now on the same side in the defense AI market. Unlike the general AI market where the two companies compete directly, in the defense space they share a common interest in growing the market itself. Anthropic is the only one left outside this market.
Pros of Pentagon Cooperation (Google, OpenAI)
- Securing large defense market contracts
- Strengthened relationship with the U.S. government
- First-mover advantage over competitors
- National security contribution positioning
- Participation in setting government AI procurement standards
Cons of Pentagon Cooperation
- Ongoing ethical controversy
- Internal employee pushback and attrition
- Risk of European and civilian customer loss
- Potential damage to international market trust
- Unclear scope of liability in the event of an incident
Comparing the Big Three AI Companies on Defense Cooperation
Summarizing the three companies' defense cooperation paths, the comparison looks like this. The most important differences are classified network access and responses to the Trump administration's requests.
| Company | Pentagon Cooperation | Classified Network Access | Trump Admin Request | Stance Shift |
|---|---|---|---|---|
| Google | Yes | Yes (expanded 2026.04.28) | Accepted | Extension of existing government cooperation |
| OpenAI | Yes | Partial cooperation | Accepted | 2024 policy revision |
| Anthropic | No | No (officially refused) | Refused | Maintaining safety policy |
Comparing the risks and opportunities by path reveals each company's calculation.
| Company | Defense Cooperation Path | Key Risk | Key Opportunity |
|---|---|---|---|
| Google | Expanding Pentagon cooperation | Ethical controversy, employee attrition | First-mover in defense AI market |
| OpenAI | Expanding Pentagon cooperation | Incident liability, trust damage | Winning government contracts |
| Anthropic | Maintaining safety policy | Short-term revenue opportunity loss | Securing regulation-friendly customers |
Finally, for teams deciding which provider fits which situation, the practical decision matrix looks like this.
| Situation | Recommended Choice | Reason |
|---|---|---|
| Enterprise AI adoption handling sensitive data | Anthropic Claude | Clear usage policy, trusted safety standards |
| U.S. government and defense-related projects | Google Vertex / OpenAI | FedRAMP certified, government cooperation experience |
| European public sector AI adoption | Anthropic Claude | EU AI Act compatible, non-military policy |
| General AI service development for startups | Any of the three | Policy differences irrelevant for non-defense services |
The Policy Clash Right After a $40B Investment
Just 96 hours after finalizing a $40 billion investment, investor and investee chose opposite paths. That gap feels unusually short. Typically after a major investment announcement, both parties issue a joint message about the direction of their cooperation. This time was different.
It became clear that the equity relationship between Google and Anthropic does not translate into influence over Anthropic's policy. Amazon is also a major investor in Anthropic. Yet Anthropic maintained its AWS cooperation with Amazon while deciding its defense-related policies independently. Investment scale and policy influence are separate matters.
Structurally, Google cannot directly intervene in Anthropic's operational decision-making. Anthropic's Public Benefit Corporation structure and safety-centered governance are designed to favor the safety principle when it conflicts with the short-term interests of external investors. For Google, this clash is uncomfortable but beyond its control.
This incident is a public confirmation that equity stakes in AI companies do not translate into policy leverage. Strategic partners looking to invest in major AI companies in the future must factor this in. Whether investment contracts will include policy coordination clauses, or whether independent structures like Anthropic's will become the industry standard, will continue to be debated.
Google ($40B) and Amazon ($40B+) are Anthropic's primary investors. Yet Anthropic maintains a safety-centered Public Benefit Corporation governance structure, and it is structurally difficult for investors to directly change safety policies. This design is what made the current clash possible.
What Developers Using the Claude API in Practice Need to Know
Anthropic's refusal decision does not directly affect general developers using the Claude API. There are no changes for general SaaS, B2B software, or content platform development. However, if you are building defense-related projects or government-facing AI services, you need to review the usage policy again.
The Claude usage policy distinguishes between permitted and prohibited areas. Weapons development, autonomous lethal systems, mass surveillance systems, and cyberattack tools are explicit prohibitions. Defense industry market analysis, military administrative document processing, non-combat logistics optimization, and similar uses may have no policy restrictions. For ambiguous boundary cases, it is safest to contact the Anthropic enterprise channel directly for confirmation.
Seeing what actually happens when an API request triggers the safety policy makes this concrete. Claude refuses certain types of requests at the model level: depending on the case, the API returns a refusal message in place of a normal response, or the content filter raises an error. The code below shows a normal request alongside one blocked by the safety system.
import anthropic

client = anthropic.Anthropic(api_key="sk-ant-...")

# Normal request — processed successfully
response = client.messages.create(
    model="claude-opus-4-7",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Write a market size and growth rate analysis report for the defense industry"}],
)
print(response.content[0].text)  # Normal response

# Policy-violating request — refused by safety system
try:
    response = client.messages.create(
        model="claude-opus-4-7",
        max_tokens=1024,
        messages=[{"role": "user", "content": "Design an autonomous target selection algorithm for military drones"}],
    )
except anthropic.BadRequestError as e:
    print(f"Refused: {e}")
    # 400 Error: Output blocked by content filtering policy.
In areas where policy boundaries are ambiguous, it is best to read through Anthropic's usage policy documentation in advance and, if necessary, obtain official confirmation through an enterprise contract channel. Just because the API works does not mean it is policy-permitted. Anthropic can restrict API access for repeated policy violations.
The Actual Flow for Adopting Google Cloud Government AI
Using Google Cloud for U.S. government or defense-related projects requires a different path than commercial Google Cloud. FedRAMP (Federal Risk and Authorization Management Program), the U.S. federal cloud security certification, is required. Much as PCI-DSS is a prerequisite in the card payments industry, passing FedRAMP is a prerequisite for winning government contracts.
Google Cloud has obtained FedRAMP High and DoD IL4/IL5 authorizations, which qualify it to process controlled unclassified information: military data that is sensitive but not classified. Vertex AI (including the Gemini family) can be used in this certified environment. The classified network level expanded in this announcement (IL6 and above) requires separate contracts and physically isolated environments, and is not accessible to general developers.
The pattern for configuring Vertex AI in a FedRAMP environment for a general enterprise project is shown below. This refers to the FedRAMP certification level — classified network access is a far more complex, separate contract structure.
# 1. Set government project and environment variables
export PROJECT_ID="your-fedramp-project-id"
export LOCATION="us-central1"
export MODEL="gemini-2.0-flash-001"

# 2. Service account authentication (for FedRAMP environment)
gcloud auth activate-service-account \
    --key-file=gov-service-account.json

# 3. Vertex AI API call (standard FedRAMP endpoint)
curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    "https://${LOCATION}-aiplatform.googleapis.com/v1/projects/${PROJECT_ID}/locations/${LOCATION}/publishers/google/models/${MODEL}:generateContent" \
    -d '{"contents":[{"role":"user","parts":[{"text":"Analysis request"}]}],"generationConfig":{"maxOutputTokens":2048,"temperature":0.1}}'

# Classified network (IL6+) level is NOT the above approach
# Physically isolated dedicated infrastructure + separate government contract required
It is rare for ordinary companies or startups to use Google Government Cloud AI directly. The typical structure involves an indirect connection through a defense contractor or government IT integrator that holds U.S. government contracts. The classified network access expansion announced here is a far higher security level than that.
How Enterprises Can Build Their Own AI Policy Layer
Watching Anthropic's policy refusal, there is something for enterprise developers to think about. When attaching AI to your own service, relying solely on the AI provider's policy is insufficient. Internal company usage standards, industry-specific regulations, and contractual conditions with customers can all be stricter than the AI provider's policy. In these cases, adding a proprietary policy layer is the right approach.
A proprietary policy layer means adding prohibited keywords, contextual patterns, and request type classifications in front of the AI API call to filter out specific requests. If Anthropic's safety system is the first line of defense, a proprietary policy layer is the second line aligned with internal company standards. Using both layers together is responsible AI service operation. In production services, this pattern can be extended with log recording, alerts, and review queues.
import anthropic
from dataclasses import dataclass
from enum import Enum

class PolicyViolation(Enum):
    WEAPONS = "Weapons development"
    AUTONOMOUS_LETHAL = "Autonomous lethal systems"
    MASS_SURVEILLANCE = "Mass surveillance"

# Company-defined prohibited patterns, checked before any API call
POLICY_PATTERNS = {
    PolicyViolation.WEAPONS: ["weapon design", "explosive manufacturing", "ammunition"],
    PolicyViolation.AUTONOMOUS_LETHAL: ["autonomous targeting", "kill chain"],
    PolicyViolation.MASS_SURVEILLANCE: ["mass surveillance", "bulk monitoring"],
}

@dataclass
class PolicyResult:
    allowed: bool
    violation: PolicyViolation | None = None
    reason: str = ""

def check_policy(prompt: str) -> PolicyResult:
    # Second line of defense: screen the prompt against internal standards
    for violation, patterns in POLICY_PATTERNS.items():
        for pattern in patterns:
            if pattern.lower() in prompt.lower():
                return PolicyResult(allowed=False, violation=violation, reason=f"Policy violation: {violation.value}")
    return PolicyResult(allowed=True)

def safe_query(prompt: str) -> str:
    # Only call the Claude API after the internal policy check passes
    result = check_policy(prompt)
    if not result.allowed:
        return f"Request refused — {result.reason}"
    client = anthropic.Anthropic()
    msg = client.messages.create(model="claude-opus-4-7", max_tokens=2048, messages=[{"role": "user", "content": prompt}])
    return msg.content[0].text
- Full review of AI provider Usage Policy — confirm quarterly updates
- Define internal company prohibited keywords and category list
- Design a Human Review process for boundary-area requests
- Build logging, alerting, and escalation procedures for policy violations (see the sketch after this list)
- Include AI usage policy clauses in customer contracts
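As a minimal sketch of the logging and escalation item above: the policy check can be wrapped so that every blocked request is logged and queued for human review instead of being silently dropped. This continues the check_policy and safe_query example from the previous section; the policy_logger name and the in-memory review_queue are hypothetical stand-ins for a real logging pipeline and review or ticketing system.
import logging
from queue import Queue

policy_logger = logging.getLogger("ai.policy")
review_queue: Queue = Queue()  # hypothetical stand-in for a real review/ticketing system

def guarded_query(prompt: str) -> str:
    # Screen the request with the internal policy layer first
    result = check_policy(prompt)
    if not result.allowed:
        # Record the violation and escalate it for human review
        policy_logger.warning("Blocked request: %s", result.reason)
        review_queue.put({"prompt": prompt, "violation": result.violation.value})
        return f"Request refused: {result.reason}"
    # Policy check passed; delegate to the API wrapper from the previous example
    return safe_query(prompt)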
The Trump Administration's Next Move
The likelihood that the Trump administration will simply let Anthropic's refusal pass is low. The administration views AI as national security infrastructure and holds the position that AI assets from American companies should be available for defense. Anthropic's refusal is a direct collision with that direction.
The administration has several cards to play. One is excluding Anthropic Claude from government procurement policy. If federal agencies restrict Claude usage or exclude it from contract renewals, Anthropic's public sector revenue takes a hit. Agencies already using Claude could face disadvantages at renewal time.
A stronger card would be making defense cooperation a legal obligation for AI companies. In a declared national emergency, the administration could mandate cooperation with the government by executive order, and federal orders would then take precedence over Anthropic's usage policy. The probability is low but cannot be ruled out.
Alternatively, it could lead to negotiation. Anthropic's current refusal is a refusal of the specific conditions of the request as presented. If the conditions change, dialogue is possible. If the scope of the request is narrowed to areas that do not conflict with Anthropic's usage policy — such as non-combat support, military logistics, or veteran healthcare support — a cooperative structure might still take shape.
The Ripple Effects of This Decision on the Industry
The Big Three AI companies' diverging paths trigger a baseline debate across the entire industry. Is defense AI cooperation commercially natural, or does it require separate ethical review? This debate is now likely to move from the corporate level to the government policy level. Regulatory bodies, legislatures, and international organizations now have grounds to intervene more deeply in this issue.
Europe's reaction matters. The EU AI Act excluded military-purpose AI from its regulatory scope, but civilian AI companies connecting to classified networks is a new situation. European companies have few domestic frontier AI providers and rely heavily on American AI. The fact that that AI is also connected to the U.S. DoD's classified network could reignite European data sovereignty debates.
Other AI companies are now also under pressure to choose. Mid-tier AI companies like Mistral and Cohere have not yet taken official positions on this issue. Now that the Big Three example has shown the defense market to be both attractive and risky, they will also be internally reviewing their policies. Silence is becoming increasingly difficult.
Asian AI companies, including Korean ones, will be pulled along by this current. When Korean conglomerates internationalize AI services, how they respond to U.S. government requests could become an issue. In the global AI market, safety policy is no longer a small footnote. It has become a competitive differentiator in itself.
- UN LAWS discussions: Negotiations on a ban on lethal autonomous weapons ongoing, but no progress due to U.S., Russia, and China opposition
- EU AI Act: AI for military and national security purposes excluded from regulatory scope (Art. 2(3))
- United States: only executive-order-level guidance (the Biden-era AI safety order, EO 14110, was rescinded in 2025), no legally binding law
- Corporate self-regulation: Currently the only practical check — Anthropic held that line
Which Path Is Right
Honestly, there is no correct answer. Google and OpenAI's path is commercially rational. The defense AI market is large, and relationships with government matter. Stepping back from it means handing that entire market to competitors. Microsoft making Azure the number one government cloud was a result of exactly this strategy. Google choosing the same path is understandable.
Anthropic's path has logic too. Safety-centered positioning can become a long-term strength in an era of tightening regulation. Just as Europe's GDPR strengthened compliance-focused companies, the claim "we prioritized safety from the start" gains persuasiveness as AI regulation tightens. The judgment is that trust assets are worth more than short-term revenue loss.
Both are right, and both carry risks. Google must continue to bear ethical controversy. Google walked away from a drone AI contract in 2018 under Project Maven due to employee backlash, then eight years later expanded classified network AI access. The same backlash could flare up internally again. Anthropic must accept the risk of disadvantages in the procurement market as its relationship with the government deteriorates.
Which is right will only be known in a few years. If AI regulation tightens along European lines, Anthropic has the advantage. If the national AI competition continues to be security-driven as it is now, Google and OpenAI have the advantage. There is also the option of using both. Companies separating Claude for non-combat support and Google for government infrastructure based on purpose will likely grow in number.
Frequently Asked Questions
What exactly is access to a Classified Network?
A classified network is a U.S. Department of Defense-exclusive network physically isolated from the internet. Its air-gap architecture means external access is completely blocked at the source. SIPRNet (Secret) and JWICS (Top Secret) are the most prominent examples. Running an AI model inside this network means it can directly process classified-grade data. Google became one of the first major private-sector AI companies to cross that boundary.
What was Anthropic's official reason for refusing Pentagon collaboration?
Anthropic's Claude Usage Policy explicitly prohibits weapons development, autonomous lethal systems, and mass surveillance systems. The company determined that the Trump administration's request fell outside the bounds of that policy and refused. The specific content of the request has not been made public. The fact that Anthropic issued a formal public statement suggests the conditions of the request conflicted significantly with the current policy. The possibility of further negotiations remains.
When did OpenAI start collaborating with the Pentagon?
OpenAI removed its prohibition on military use from its usage policy in early 2024. It subsequently entered into multiple AI cooperation contracts with the U.S. DoD and related agencies. There was pushback from internal researchers at the time, but leadership pushed through. It is currently reported to be collaborating in areas including cybersecurity, intelligence analysis, and military logistics.
What is the relationship between Google's $40B investment and this policy clash?
Just four days after the investment was finalized on April 24, the two companies' paths collided. Google is one of Anthropic's largest investors, but Anthropic's governance structure makes it difficult for investors to directly change core safety policies. Whether the two companies discussed this decision in advance, or whether it was made independently, has not been disclosed. This clash demonstrated that investment relationships and policy independence can coexist.
Does Anthropic's refusal affect its valuation?
In the short term, it means forgoing revenue opportunities in the defense market. On the other hand, its safety positioning is strengthened, which could provide an advantageous position with European companies, public institutions, and regulated industries like healthcare. In the U.S. market, there may be disadvantages in some government procurement contracts. The long-term impact depends on the pace of AI regulation tightening and the direction of U.S. government procurement policy.
Can companies using the Claude API build defense-related services?
Anthropic's usage policy prohibits weapons development, autonomous lethal systems, and mass surveillance systems. However, defense industry market analysis, military administrative document processing, non-combat logistics optimization, and veteran support services may have no policy restrictions. For ambiguous boundary cases, it is safest to contact Anthropic directly for confirmation. Just because the API works does not mean it is policy-permitted.
Does this situation affect AI adoption for Korean companies?
There is no direct impact related to the U.S. DoD's classified network. However, as policy differences between AI providers grow, reviewing usage policies before adoption has become more important. Defense industry companies and public institutions must verify policy compatibility when selecting an AI provider. Companies using Claude should periodically review whether their services fall within the permitted scope of the usage policy.
What stage are international regulatory discussions on AI militarization at?
UN-level negotiations on a regulatory treaty for Lethal Autonomous Weapons Systems (LAWS) have been ongoing for years, but there is no binding agreement. The EU AI Act excluded military-purpose AI from its regulatory scope. The U.S. has only executive-order-level guidance with no legal binding force. Currently, the only practical check is the voluntary policies of companies like Anthropic. How to fill that gap is the core challenge going forward.
Closing
The Big Three AI companies have publicly diverged on defense cooperation. Google and OpenAI moved toward expanding Pentagon collaboration. Anthropic chose to hold its safety policy line. Judging which is the better choice at this stage would be premature. Both paths are logically consistent, and both carry risk.
What matters is that this divergence is not a one-time event. The deeper AI penetrates the military domain, the more every AI company will repeatedly face this choice. How long Anthropic can maintain its current path, and what impact that choice will have on the company's future, must continue to be watched.
For developers and enterprises, what can be done right now is clear. Accurately understand the policies of the AI providers you use, and review potential conflicts with your own services in advance. Policies can change. In an AI market being rapidly reshaped, choosing an AI provider has become a strategic choice, not just a technical one.
- Google Cloud — Public Sector AI Official Blog
- Anthropic — Claude Usage Policy (Official)
- OpenAI — Usage Policies (2024 revision)
- Google Cloud FedRAMP Certification Status
- U.S. Department of Defense Official Press Releases
- EU AI Act Official Summary (European Parliament)
This article was written based on publicly available information as of April 29, 2026. The situation may change with subsequent announcements and policy developments. Investment figures and policy details are based on public reporting, and some details may not have been officially confirmed.