Anthropic sues Trump admin over Pentagon blacklisting
Anthropic filed suit Monday against the Trump administration, alleging the US government retaliated against the company for refusing to let its Claude AI model be used for autonomous lethal warfare and mass surveillance of Americans.
In the 48-page complaint, filed in federal court in San Francisco, Anthropic seeks to have its designation as a national security supply-chain risk declared unlawful and blocked.
In its lawsuit, Anthropic said it was founded on the belief that its AI should be "used in a way that maximizes positive outcomes for humanity" and should "be the safest and the most responsible."
"Anthropic brings this suit because the federal government has retaliated against it for expressing that principle," the lawsuit says.
Anthropic is the first US company ever to have been publicly punished with such a designation, a label typically reserved for organizations from foreign adversary countries, such as Chinese tech giant Huawei.
The label not only blocks use of the company's technology by the Pentagon, but also requires all defense vendors and contractors to certify that they do not use Anthropic's models in their work with the department.
"The consequences of this case are enormous," the lawsuit states, with the government "seeking to destroy the economic value created by one of the world's fastest-growing private companies."
The suit names more than a dozen federal agencies and cabinet officials as defendants.
The dispute erupted after Anthropic infuriated Pentagon chief Pete Hegseth by insisting its technology should not be used for mass surveillance or fully autonomous weapons systems.
President Donald Trump subsequently ordered every federal agency to cease all use of Anthropic's technology.
Hours later, Hegseth designated Anthropic a "Supply-Chain Risk to National Security" and ordered that no military contractor, supplier or partner "may conduct any commercial activity with Anthropic," while allowing a six-month transition period for the Pentagon itself.
The row erupted days before the US military strike on Iran. Claude is the Pentagon's most widely deployed frontier AI model and the only such model currently operating on the Defense Department's classified systems.
- Arbitrary? -
In its lawsuit, Anthropic argues the actions taken against it violate the First Amendment by punishing the company for protected speech on AI safety policy, exceed the Pentagon's statutory authority, and deprive it of due process under the Fifth Amendment.
"The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech," the complaint states.
More than three dozen AI industry insiders from OpenAI and Google, including Google chief scientist Jeff Dean, argued in support of Anthropic in an amicus brief filed with the court on Monday.
Saying they were expressing their opinions as professionals who build, train or study AI and did not represent their companies, they urged the court to side with Anthropic.
"We are united in the conviction that today's frontier AI systems present risks when deployed to enable domestic mass surveillance or the operation of autonomous lethal weapons systems without human oversight, and that those risks require some kind of guardrails, whether via technical safeguards or usage restrictions," they said in the brief.
Current AI models are not reliable enough to be trusted with making lethal targeting decisions, and pairing powerful AI with all the data available about people threatens to change the fabric of public life in this country, the filing argued.
"The government's designation of Anthropic as a supply chain risk was an improper and arbitrary use of power that has serious ramifications for our industry," the brief contended.
Founded in 2021 by siblings Dario and Daniela Amodei, both former staffers at ChatGPT-maker OpenAI, Anthropic has positioned itself as a safety-focused alternative in the AI race.
C.Kovalenko--BTB