Generative AI's most prominent skeptic doubles down
Two and a half years after ChatGPT rocked the world, scientist and writer Gary Marcus remains generative artificial intelligence's most prominent skeptic, offering a counter-narrative to Silicon Valley's AI true believers.
Marcus became a prominent figure of the AI revolution in 2023, when he sat beside OpenAI chief Sam Altman at a Senate hearing in Washington as both men urged politicians to take the technology seriously and consider regulation.
Much has changed since then. Altman has abandoned his calls for caution, instead teaming up with Japan's SoftBank and funds in the Middle East to propel his company to sky-high valuations as he tries to make ChatGPT the next era-defining tech behemoth.
"Sam's not getting money anymore from the Silicon Valley establishment," and his seeking funding from abroad is a sign of "desperation," Marcus told AFP on the sidelines of the Web Summit in Vancouver, Canada.
Marcus's criticism centers on a fundamental belief: generative AI, the predictive technology that churns out seemingly human-level content, is simply too flawed to be transformative.
The large language models (LLMs) that power these capabilities are inherently broken, he argues, and will never deliver on Silicon Valley's grand promises.
"I'm skeptical of AI as it is currently practiced," he said. "I think AI could have tremendous value, but LLMs are not the way there. And I think the companies running it are not mostly the best people in the world."
His skepticism stands in stark contrast to the prevailing mood at the Web Summit, where most conversations among 15,000 attendees focused on generative AI's seemingly infinite promise.
Many believe humanity stands on the cusp of achieving superintelligence or artificial general intelligence (AGI), technology that could match and even surpass human capability.
That optimism has driven OpenAI's valuation to $300 billion, an unprecedented level for a startup, with billionaire Elon Musk's xAI racing to keep pace.
Yet for all the hype, the practical gains remain limited.
The technology excels mainly at coding assistance for programmers and text generation for office work. AI-created images, while often entertaining, serve primarily as memes or deepfakes, offering little obvious benefit to society or business.
Marcus, a longtime New York University professor, champions a fundamentally different approach to building AI -- one he believes might actually achieve human-level intelligence in ways that current generative AI never will.
"One consequence of going all-in on LLMs is that any alternative approach that might be better gets starved out," he explained.
This tunnel vision will "cause a delay in getting to AI that can help us beyond just coding -- a waste of resources."
- 'Right answers matter' -
Instead, Marcus advocates for neurosymbolic AI, an approach that attempts to rebuild human logic artificially rather than simply training computer models on vast datasets, as is done with ChatGPT and similar products like Google's Gemini or Anthropic's Claude.
He dismisses fears that generative AI will eliminate white-collar jobs, citing a simple reality: "There are too many white-collar jobs where getting the right answer actually matters."
This points to AI's most persistent problem: hallucinations, the technology's well-documented tendency to produce confident-sounding mistakes.
Even AI's strongest advocates acknowledge this flaw may be impossible to eliminate.
Marcus recalls a telling exchange from 2023 with LinkedIn founder Reid Hoffman, a Silicon Valley heavyweight: "He bet me any amount of money that hallucinations would go away in three months. I offered him $100,000 and he wouldn't take the bet."
Looking ahead, Marcus warns of a darker consequence once investors realize generative AI's limitations: companies like OpenAI will inevitably monetize their most valuable asset, user data.
"The people who put in all this money will want their returns, and I think that's leading them toward surveillance," he said, pointing to Orwellian risks for society.
"They have all this private data, so they can sell that as a consolation prize."
Marcus acknowledges that generative AI will find useful applications in areas where occasional errors don't matter much.
"They're very useful for auto-complete on steroids: coding, brainstorming, and stuff like that," he said.
"But nobody's going to make much money off it because they're expensive to run, and everybody has the same product."
P.Anderson--BTB