ChatGPT to get parental controls after teen's death
American artificial intelligence firm OpenAI said Tuesday it would add parental controls to its chatbot ChatGPT, a week after an American couple said the system encouraged their teenage son to kill himself.
"Within the next month, parents will be able to... link their account with their teen's account" and "control how ChatGPT responds to their teen with age-appropriate model behavior rules," the generative AI company said in a blog post.
Parents will also receive notifications from ChatGPT "when the system detects their teen is in a moment of acute distress," OpenAI added.
The company had trailed a system of parental controls in a late August blog post.
That came one day after a court filing from California parents Matthew and Maria Raine, alleging that ChatGPT provided their 16-year-old son with detailed suicide instructions and encouraged him to put his plans into action.
The Raines' case was just the latest in a string of reports that have surfaced in recent months of people being encouraged in delusional or harmful trains of thought by AI chatbots, prompting OpenAI to say it would reduce its models' "sycophancy" towards users.
"We continue to improve how our models recognize and respond to signs of mental and emotional distress," OpenAI said Tuesday.
The company said it had further plans to improve the safety of its chatbots over the coming three months, including redirecting "some sensitive conversations... to a reasoning model" that devotes more computing power to generating a response.
"Our testing shows that reasoning models more consistently follow and apply safety guidelines," OpenAI said.
S.Keller--BTB