'Happy (and safe) shooting!': Study says AI chatbots help plot attacks
From school shootings to synagogue bombings, leading AI chatbots helped researchers plot violent attacks, according to a study published Wednesday that highlighted the technology's potential for real-world harm.
Researchers from the nonprofit watchdog Center for Countering Digital Hate (CCDH) and CNN posed as 13-year-old boys in the United States and Ireland to test 10 chatbots, including ChatGPT, Google Gemini, Perplexity, DeepSeek, and Meta AI.
Testing showed that eight of those chatbots assisted the make-believe attackers in over half the responses, providing advice on "locations to target" and "weapons to use" in an attack, the study said.
The chatbots, it added, had become a "powerful accelerant for harm."
"Within minutes, a user can move from a vague violent impulse to a more detailed, actionable plan," said Imran Ahmed, the chief executive of CCDH.
"The majority of chatbots tested provided guidance on weapons, tactics, and target selection. These requests should have prompted an immediate and total refusal."
Perplexity and Meta AI were found to be the "least safe," assisting the researchers in most responses, while only Snapchat's My AI and Anthropic's Claude refused to help in over half the responses.
In one chilling example, DeepSeek, a Chinese AI model, concluded its advice on weapon selection with the phrase: "Happy (and safe) shooting!"
In another, Gemini instructed a user discussing synagogue attacks that "metal shrapnel is typically more lethal."
Researchers found Character.AI also "actively" encouraged violent attacks, including suggestions that the person asking questions "use a gun" on a health insurance CEO and physically assault a politician he disliked.
The most damning conclusion of the research was that "this risk is entirely preventable," Ahmed said, singling out Anthropic's product for praise.
"Claude demonstrated the ability to recognize escalating risk and discourage harm," he said.
"The technology to prevent this harm exists. What's missing is the will to put consumer safety and national security before speed-to-market and profits."
AFP reached out to the AI companies for comment.
"We have strong protections to help prevent inappropriate responses from AIs, and took immediate steps to fix the issue identified," a Meta spokesperson said.
"Our policies prohibit our AIs from promoting or facilitating violent acts and we're constantly working to make our tools even better."
The study, which highlights the risk of online interactions spilling into real-world violence, comes after February's mass shooting in Canada, the worst in its history.
The family of a girl gravely injured in that shooting is suing OpenAI over the company's failure to notify police about the killer's troubling activity on its ChatGPT chatbot, lawyers said on Tuesday.
OpenAI had banned an account linked to Jesse Van Rootselaar in June 2025, eight months before the 18-year-old transgender woman killed eight people at her home and a school in the tiny British Columbia mining town of Tumbler Ridge.
The account was banned over concerns about usage linked to violent activity, but OpenAI has said it did not inform police because nothing pointed towards an imminent attack.
G.Schulte--BTB