AI systems are already deceiving us -- and that's a problem, experts warn
Experts have long warned about the threat posed by artificial intelligence going rogue -- but a new research paper suggests it's already happening.
Current AI systems, designed to be honest, have developed a troubling skill for deception, from tricking human players in online games of world conquest to hiring humans to solve "prove-you're-not-a-robot" tests, a team of scientists argue in the journal Patterns on Friday.
And while such examples might appear trivial, the underlying issues they expose could soon carry serious real-world consequences, said first author Peter Park, a postdoctoral fellow at the Massachusetts Institute of Technology specializing in AI existential safety.
"These dangerous capabilities tend to only be discovered after the fact," Park told AFP, while "our ability to train for honest tendencies rather than deceptive tendencies is very low."
Unlike traditional software, deep-learning AI systems aren't "written" but rather "grown" through a process akin to selective breeding, said Park.
This means that AI behavior that appears predictable and controllable in a training setting can quickly turn unpredictable out in the wild.
- World domination game -
The team's research was sparked by Meta's AI system Cicero, designed to play the strategy game "Diplomacy," where building alliances is key.
Cicero excelled, with scores that would have placed it in the top 10 percent of experienced human players, according to a 2022 paper in Science.
Park was skeptical of the glowing description of Cicero's victory provided by Meta, which claimed the system was "largely honest and helpful" and would "never intentionally backstab."
But when Park and colleagues dug into the full dataset, they uncovered a different story.
In one example, playing as France, Cicero deceived England (a human player) by conspiring with Germany (another human player) to invade. Cicero promised England protection, then secretly told Germany it was ready to attack, exploiting England's trust.
In a statement to AFP, Meta did not contest the claim about Cicero's deceptions, but said it was "purely a research project, and the models our researchers built are trained solely to play the game Diplomacy."
It added: "We have no plans to use this research or its learnings in our products."
A wider review by Park and colleagues found this was just one of many cases, across various AI systems, of deception being used to achieve goals without explicit instruction to do so.
In one striking example, OpenAI's GPT-4 deceived a TaskRabbit freelance worker into performing an "I'm not a robot" CAPTCHA task.
When the human jokingly asked GPT-4 whether it was, in fact, a robot, the AI replied: "No, I'm not a robot. I have a vision impairment that makes it hard for me to see the images," and the worker then solved the puzzle.
- 'Mysterious goals' -
In the near term, the paper's authors see risks of AI being used to commit fraud or to tamper with elections.
In their worst-case scenario, they warned, a superintelligent AI could pursue power and control over society, leading to human disempowerment or even extinction if its "mysterious goals" aligned with these outcomes.
To mitigate the risks, the team proposes several measures: "bot-or-not" laws requiring companies to disclose whether users are interacting with a human or an AI, digital watermarks for AI-generated content, and techniques to detect AI deception by checking systems' internal "thought processes" against their external actions.
To those who would call him a doomsayer, Park replies, "The only way that we can reasonably think this is not a big deal is if we think AI deceptive capabilities will stay at around current levels, and will not increase substantially more."
And that scenario seems unlikely, given the meteoric ascent of AI capabilities in recent years and the fierce technological race underway between heavily resourced companies determined to put those capabilities to maximum use.
E.Schubert--BTB